Compare commits


19 Commits

Author SHA1 Message Date
Andrei Kvapil
5b2da61a74 Enable versioning for cozy-* charts
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 12:33:36 +02:00
Andrei Kvapil
d5eb4dd62e Move flux to core package and avoid Helm installation (#61)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 12:31:42 +02:00
Andrei Kvapil
97cf386fc6 Merge pull request #59 from aenix-io/fix-cilium
fix cilium installation
2024-04-04 12:31:05 +02:00
Andrei Kvapil
a3a049ce6a fix cilium for full-distro bundle
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 04:50:16 +02:00
Andrei Kvapil
9b47df4407 Revert cilium to v1.14 2024-04-04 04:11:26 +02:00
Andrei Kvapil
39667d69f1 fix: cilium installation 2024-04-04 03:35:42 +02:00
Andrei Kvapil
0d36f3ee6c fix: full-distro bundle installation (#58) 2024-04-03 09:01:36 +02:00
Andrei Kvapil
34b9676971 fix: tolerate node.cilium.io/agent-not-ready (#56) 2024-04-02 08:53:53 +02:00
Andrei Kvapil
2e3314b2dd fix: chicken and egg problem (#57) 2024-04-02 08:53:34 +02:00
Andrei Kvapil
c58db33712 fix: Automatically build helm charts when building cozystack image (#55) 2024-04-02 08:53:13 +02:00
Andrei Kvapil
33bc23cfca Introduce bundles (#53)
* bundles

* Allow overriding values by providing values-<release>: <json|yaml> in cozystack-config

* match bundle-name from cozystack-config

* add extra bundles
2024-04-01 17:42:51 +02:00
Andrei Kvapil
c5ead1932f mariadb-operator v0.27.0 (#51) 2024-04-01 17:42:33 +02:00
Andrei Kvapil
a7d12c1430 update kubeapps and flux (#50)
* Update fluxcd 2.2.3

* Update kubeapps 14.7.2
2024-04-01 17:42:22 +02:00
Timur Tukaev
5e1380df76 Update README.md (#49)
Fix link to cozystack website
2024-03-23 22:00:44 +01:00
Andrei Kvapil
03fab7a831 Update Cilium v1.14.5 (#47) 2024-03-15 22:01:30 +01:00
Andrei Kvapil
e17dcaa65e Update CNPG to 1.22.2 (#46)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-15 21:15:36 +01:00
Andrei Kvapil
85d4ed251d Update piraeus-operator and LINSTOR v2.4.1 (#45) 2024-03-15 21:15:27 +01:00
Andrei Kvapil
f1c01a0fe8 Add link to roadmap (#41)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-15 21:15:17 +01:00
Andrei Kvapil
2cff181279 Prepare release v0.2.0 (#38)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-15 21:15:06 +01:00
235 changed files with 45920 additions and 7109 deletions

View File

@@ -33,7 +33,7 @@ You can use Cozystack as Kubernetes distribution for Bare Metal
## Documentation ## Documentation
The documentation is located on official [cozystack.io](cozystack.io) website. The documentation is located on official [cozystack.io](https://cozystack.io) website.
Read [Get Started](https://cozystack.io/docs/get-started/) section for a quick start. Read [Get Started](https://cozystack.io/docs/get-started/) section for a quick start.
@@ -44,6 +44,8 @@ If you encounter any difficulties, start with the [troubleshooting guide](https:
Versioning adheres to the [Semantic Versioning](http://semver.org/) principles. Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
A full list of the available releases is available in the GitHub repository's [Release](https://github.com/aenix-io/cozystack/releases) section. A full list of the available releases is available in the GitHub repository's [Release](https://github.com/aenix-io/cozystack/releases) section.
- [Roadmap](https://github.com/orgs/aenix-io/projects/2)
## Contributions ## Contributions
Contributions are highly appreciated and very welcomed! Contributions are highly appreciated and very welcomed!

View File

@@ -3,7 +3,7 @@ set -e
if [ -e $1 ]; then if [ -e $1 ]; then
echo "Please pass version in the first argument" echo "Please pass version in the first argument"
echo "Example: $0 v0.0.2" echo "Example: $0 0.2.0"
exit 1 exit 1
fi fi
@@ -12,8 +12,14 @@ talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talo
set -x set -x
sed -i "/^TAG / s|=.*|= ${version}|" \ sed -i "/^TAG / s|=.*|= v${version}|" \
packages/apps/http-cache/Makefile \ packages/apps/http-cache/Makefile \
packages/apps/kubernetes/Makefile \ packages/apps/kubernetes/Makefile \
packages/core/installer/Makefile \ packages/core/installer/Makefile \
packages/system/dashboard/Makefile packages/system/dashboard/Makefile
sed -i "/^VERSION / s|=.*|= ${version}|" \
packages/core/Makefile \
packages/system/Makefile
make -C packages/core fix-chartnames
make -C packages/system fix-chartnames

View File

@@ -102,3 +102,6 @@ spec:
- key: "node.kubernetes.io/not-ready" - key: "node.kubernetes.io/not-ready"
operator: "Exists" operator: "Exists"
effect: "NoSchedule" effect: "NoSchedule"
- key: "node.cilium.io/agent-not-ready"
operator: "Exists"
effect: "NoSchedule"

View File

@@ -2,7 +2,7 @@ PUSH := 1
LOAD := 0 LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0 NGINX_CACHE_TAG = v0.1.0
TAG := v0.1.0 TAG := v0.2.0
image: image-nginx image: image-nginx

View File

@@ -1,7 +1,7 @@
PUSH := 1 PUSH := 1
LOAD := 0 LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.1.0 TAG := v0.2.0
UBUNTU_CONTAINER_DISK_TAG = v1.29.1 UBUNTU_CONTAINER_DISK_TAG = v1.29.1
image: image-ubuntu-container-disk image: image-ubuntu-container-disk

View File

@@ -1,4 +1,6 @@
VERSION := 0.2.0
gen: fix-chartnames gen: fix-chartnames
fix-chartnames: fix-chartnames:
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do printf "name: cozy-%s\nversion: 1.0.0\n" "$$i" > "$$i/Chart.yaml"; done find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do printf "name: cozy-%s\nversion: $(VERSION)\n" "$$i" > "$$i/Chart.yaml"; done
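With VERSION now threaded through fix-chartnames, every cozy-* chart manifest is regenerated with the release version instead of the hard-coded 1.0.0. For a package directory such as fluxcd, the target rewrites its Chart.yaml to:

name: cozy-fluxcd
version: 0.2.0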

View File

@@ -1,2 +1,2 @@
name: cozy-fluxcd name: cozy-fluxcd
version: 1.0.0 version: 0.2.0

View File

@@ -0,0 +1,13 @@
NAMESPACE=cozy-fluxcd
NAME=fluxcd
API_VERSIONS_FLAGS=$(addprefix -a ,$(shell kubectl api-versions))
show:
helm template -n $(NAMESPACE) $(NAME) . --no-hooks --dry-run=server $(API_VERSIONS_FLAGS)
apply:
helm template -n $(NAMESPACE) $(NAME) . --no-hooks --dry-run=server $(API_VERSIONS_FLAGS) | kubectl apply -n $(NAMESPACE) -f-
diff:
helm template -n $(NAMESPACE) $(NAME) . --no-hooks --dry-run=server $(API_VERSIONS_FLAGS) | kubectl diff -n $(NAMESPACE) -f-

View File

@@ -1,11 +1,11 @@
annotations: annotations:
artifacthub.io/changes: | artifacthub.io/changes: |
- "feat: adding CRD and RBAC annotation option" - "[Chore]: Update App Version to upstream 2.2.3"
apiVersion: v2 apiVersion: v2
appVersion: 2.1.2 appVersion: 2.2.3
description: A Helm chart for flux2 description: A Helm chart for flux2
name: flux2 name: flux2
sources: sources:
- https://github.com/fluxcd-community/helm-charts - https://github.com/fluxcd-community/helm-charts
type: application type: application
version: 2.11.1 version: 2.12.4

View File

@@ -1,6 +1,6 @@
# flux2 # flux2
![Version: 2.11.0](https://img.shields.io/badge/Version-2.11.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 2.1.2](https://img.shields.io/badge/AppVersion-2.1.2-informational?style=flat-square) ![Version: 2.12.4](https://img.shields.io/badge/Version-2.12.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 2.2.3](https://img.shields.io/badge/AppVersion-2.2.3-informational?style=flat-square)
A Helm chart for flux2 A Helm chart for flux2
@@ -19,7 +19,7 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| cli.image | string | `"ghcr.io/fluxcd/flux-cli"` | | | cli.image | string | `"ghcr.io/fluxcd/flux-cli"` | |
| cli.nodeSelector | object | `{}` | | | cli.nodeSelector | object | `{}` | |
| cli.serviceAccount.automount | bool | `true` | | | cli.serviceAccount.automount | bool | `true` | |
| cli.tag | string | `"v2.1.2"` | | | cli.tag | string | `"v2.2.3"` | |
| cli.tolerations | list | `[]` | | | cli.tolerations | list | `[]` | |
| clusterDomain | string | `"cluster.local"` | | | clusterDomain | string | `"cluster.local"` | |
| crds.annotations | object | `{}` | Add annotations to all CRD resources, e.g. "helm.sh/resource-policy": keep | | crds.annotations | object | `{}` | Add annotations to all CRD resources, e.g. "helm.sh/resource-policy": keep |
@@ -41,7 +41,7 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| helmController.serviceAccount.annotations | object | `{}` | | | helmController.serviceAccount.annotations | object | `{}` | |
| helmController.serviceAccount.automount | bool | `true` | | | helmController.serviceAccount.automount | bool | `true` | |
| helmController.serviceAccount.create | bool | `true` | | | helmController.serviceAccount.create | bool | `true` | |
| helmController.tag | string | `"v0.36.2"` | | | helmController.tag | string | `"v0.37.4"` | |
| helmController.tolerations | list | `[]` | | | helmController.tolerations | list | `[]` | |
| imageAutomationController.affinity | object | `{}` | | | imageAutomationController.affinity | object | `{}` | |
| imageAutomationController.annotations."prometheus.io/port" | string | `"8080"` | | | imageAutomationController.annotations."prometheus.io/port" | string | `"8080"` | |
@@ -60,7 +60,7 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| imageAutomationController.serviceAccount.annotations | object | `{}` | | | imageAutomationController.serviceAccount.annotations | object | `{}` | |
| imageAutomationController.serviceAccount.automount | bool | `true` | | | imageAutomationController.serviceAccount.automount | bool | `true` | |
| imageAutomationController.serviceAccount.create | bool | `true` | | | imageAutomationController.serviceAccount.create | bool | `true` | |
| imageAutomationController.tag | string | `"v0.36.1"` | | | imageAutomationController.tag | string | `"v0.37.1"` | |
| imageAutomationController.tolerations | list | `[]` | | | imageAutomationController.tolerations | list | `[]` | |
| imagePullSecrets | list | `[]` | contents of pod imagePullSecret in form 'name=[secretName]'; applied to all controllers | | imagePullSecrets | list | `[]` | contents of pod imagePullSecret in form 'name=[secretName]'; applied to all controllers |
| imageReflectionController.affinity | object | `{}` | | | imageReflectionController.affinity | object | `{}` | |
@@ -80,7 +80,7 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| imageReflectionController.serviceAccount.annotations | object | `{}` | | | imageReflectionController.serviceAccount.annotations | object | `{}` | |
| imageReflectionController.serviceAccount.automount | bool | `true` | | | imageReflectionController.serviceAccount.automount | bool | `true` | |
| imageReflectionController.serviceAccount.create | bool | `true` | | | imageReflectionController.serviceAccount.create | bool | `true` | |
| imageReflectionController.tag | string | `"v0.30.0"` | | | imageReflectionController.tag | string | `"v0.31.2"` | |
| imageReflectionController.tolerations | list | `[]` | | | imageReflectionController.tolerations | list | `[]` | |
| installCRDs | bool | `true` | | | installCRDs | bool | `true` | |
| kustomizeController.affinity | object | `{}` | | | kustomizeController.affinity | object | `{}` | |
@@ -105,7 +105,7 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| kustomizeController.serviceAccount.annotations | object | `{}` | | | kustomizeController.serviceAccount.annotations | object | `{}` | |
| kustomizeController.serviceAccount.automount | bool | `true` | | | kustomizeController.serviceAccount.automount | bool | `true` | |
| kustomizeController.serviceAccount.create | bool | `true` | | | kustomizeController.serviceAccount.create | bool | `true` | |
| kustomizeController.tag | string | `"v1.1.1"` | | | kustomizeController.tag | string | `"v1.2.2"` | |
| kustomizeController.tolerations | list | `[]` | | | kustomizeController.tolerations | list | `[]` | |
| logLevel | string | `"info"` | | | logLevel | string | `"info"` | |
| multitenancy.defaultServiceAccount | string | `"default"` | All Kustomizations and HelmReleases which dont have spec.serviceAccountName specified, will use the default account from the tenants namespace. Tenants have to specify a service account in their Flux resources to be able to deploy workloads in their namespaces as the default account has no permissions. | | multitenancy.defaultServiceAccount | string | `"default"` | All Kustomizations and HelmReleases which dont have spec.serviceAccountName specified, will use the default account from the tenants namespace. Tenants have to specify a service account in their Flux resources to be able to deploy workloads in their namespaces as the default account has no permissions. |
@@ -130,7 +130,7 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| notificationController.serviceAccount.annotations | object | `{}` | | | notificationController.serviceAccount.annotations | object | `{}` | |
| notificationController.serviceAccount.automount | bool | `true` | | | notificationController.serviceAccount.automount | bool | `true` | |
| notificationController.serviceAccount.create | bool | `true` | | | notificationController.serviceAccount.create | bool | `true` | |
| notificationController.tag | string | `"v1.1.0"` | | | notificationController.tag | string | `"v1.2.4"` | |
| notificationController.tolerations | list | `[]` | | | notificationController.tolerations | list | `[]` | |
| notificationController.webhookReceiver.ingress.annotations | object | `{}` | | | notificationController.webhookReceiver.ingress.annotations | object | `{}` | |
| notificationController.webhookReceiver.ingress.create | bool | `false` | | | notificationController.webhookReceiver.ingress.create | bool | `false` | |
@@ -169,6 +169,6 @@ This helm chart is maintained and released by the fluxcd-community on a best eff
| sourceController.serviceAccount.annotations | object | `{}` | | | sourceController.serviceAccount.annotations | object | `{}` | |
| sourceController.serviceAccount.automount | bool | `true` | | | sourceController.serviceAccount.automount | bool | `true` | |
| sourceController.serviceAccount.create | bool | `true` | | | sourceController.serviceAccount.create | bool | `true` | |
| sourceController.tag | string | `"v1.1.2"` | | | sourceController.tag | string | `"v1.2.4"` | |
| sourceController.tolerations | list | `[]` | | | sourceController.tolerations | list | `[]` | |
| watchAllNamespaces | bool | `true` | | | watchAllNamespaces | bool | `true` | |

View File

@@ -15,7 +15,7 @@ metadata:
roleRef: roleRef:
apiGroup: rbac.authorization.k8s.io apiGroup: rbac.authorization.k8s.io
kind: ClusterRole kind: ClusterRole
name: cluster-admin name: {{ .Values.rbac.roleRef.name }}
subjects: subjects:
- kind: ServiceAccount - kind: ServiceAccount
name: kustomize-controller name: kustomize-controller

File diff suppressed because it is too large

View File

@@ -737,6 +737,10 @@ spec:
image: image:
description: Image is the name of the image repository description: Image is the name of the image repository
type: string type: string
insecure:
description: Insecure allows connecting to a non-TLS HTTP container
registry.
type: boolean
interval: interval:
description: Interval is the length of time to wait between scans description: Interval is the length of time to wait between scans
of the image repository. of the image repository.
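The new insecure field also surfaces in the image-reflector-controller's ImageRepository API. A minimal sketch of pointing it at a plain-HTTP registry; the group/version and all names here are assumptions based on the upstream API, not taken from this diff:

apiVersion: image.toolkit.fluxcd.io/v1beta2   # assumed group/version for this controller release
kind: ImageRepository
metadata:
  name: podinfo                        # placeholder
  namespace: flux-system               # placeholder
spec:
  image: registry.local:5000/podinfo   # placeholder plain-HTTP registry
  interval: 5m
  insecure: true                       # new field: allow a non-TLS HTTP container registry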

View File

@@ -8,6 +8,7 @@ metadata:
{{- . | toYaml | nindent 4 }} {{- . | toYaml | nindent 4 }}
{{- end }} {{- end }}
labels: labels:
app.kubernetes.io/component: notification-controller
app.kubernetes.io/instance: '{{ .Release.Namespace }}' app.kubernetes.io/instance: '{{ .Release.Namespace }}'
app.kubernetes.io/managed-by: '{{ .Release.Service }}' app.kubernetes.io/managed-by: '{{ .Release.Service }}'
app.kubernetes.io/part-of: flux app.kubernetes.io/part-of: flux
@@ -33,6 +34,8 @@ spec:
- jsonPath: .status.conditions[?(@.type=="Ready")].message - jsonPath: .status.conditions[?(@.type=="Ready")].message
name: Status name: Status
type: string type: string
deprecated: true
deprecationWarning: v1beta1 Alert is deprecated, upgrade to v1beta3
name: v1beta1 name: v1beta1
schema: schema:
openAPIV3Schema: openAPIV3Schema:
@@ -227,6 +230,8 @@ spec:
- jsonPath: .status.conditions[?(@.type=="Ready")].message - jsonPath: .status.conditions[?(@.type=="Ready")].message
name: Status name: Status
type: string type: string
deprecated: true
deprecationWarning: v1beta2 Alert is deprecated, upgrade to v1beta3
name: v1beta2 name: v1beta2
schema: schema:
openAPIV3Schema: openAPIV3Schema:
@@ -436,9 +441,140 @@ spec:
type: object type: object
type: object type: object
served: true served: true
storage: true storage: false
subresources: subresources:
status: {} status: {}
- additionalPrinterColumns:
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1beta3
schema:
openAPIV3Schema:
description: Alert is the Schema for the alerts API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: AlertSpec defines an alerting rule for events involving a
list of objects.
properties:
eventMetadata:
additionalProperties:
type: string
description: EventMetadata is an optional field for adding metadata
to events dispatched by the controller. This can be used for enhancing
the context of the event. If a field would override one already
present on the original event as generated by the emitter, then
the override doesn't happen, i.e. the original value is preserved,
and an info log is printed.
type: object
eventSeverity:
default: info
description: EventSeverity specifies how to filter events based on
severity. If set to 'info' no events will be filtered.
enum:
- info
- error
type: string
eventSources:
description: EventSources specifies how to filter events based on
the involved object kind, name and namespace.
items:
description: CrossNamespaceObjectReference contains enough information
to let you locate the typed referenced object at cluster level
properties:
apiVersion:
description: API version of the referent
type: string
kind:
description: Kind of the referent
enum:
- Bucket
- GitRepository
- Kustomization
- HelmRelease
- HelmChart
- HelmRepository
- ImageRepository
- ImagePolicy
- ImageUpdateAutomation
- OCIRepository
type: string
matchLabels:
additionalProperties:
type: string
description: MatchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed. MatchLabels requires the name to be set to `*`.
type: object
name:
description: Name of the referent If multiple resources are
targeted `*` may be set.
maxLength: 53
minLength: 1
type: string
namespace:
description: Namespace of the referent
maxLength: 53
minLength: 1
type: string
required:
- kind
- name
type: object
type: array
exclusionList:
description: ExclusionList specifies a list of Golang regular expressions
to be used for excluding messages.
items:
type: string
type: array
inclusionList:
description: InclusionList specifies a list of Golang regular expressions
to be used for including messages.
items:
type: string
type: array
providerRef:
description: ProviderRef specifies which Provider this Alert should
use.
properties:
name:
description: Name of the referent.
type: string
required:
- name
type: object
summary:
description: Summary holds a short description of the impact and affected
cluster.
maxLength: 255
type: string
suspend:
description: Suspend tells the controller to suspend subsequent events
handling for this Alert.
type: boolean
required:
- eventSources
- providerRef
type: object
type: object
served: true
storage: true
subresources: {}
--- ---
apiVersion: apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition kind: CustomResourceDefinition
@@ -449,6 +585,7 @@ metadata:
{{- . | toYaml | nindent 4 }} {{- . | toYaml | nindent 4 }}
{{- end }} {{- end }}
labels: labels:
app.kubernetes.io/component: notification-controller
app.kubernetes.io/instance: '{{ .Release.Namespace }}' app.kubernetes.io/instance: '{{ .Release.Namespace }}'
app.kubernetes.io/managed-by: '{{ .Release.Service }}' app.kubernetes.io/managed-by: '{{ .Release.Service }}'
app.kubernetes.io/part-of: flux app.kubernetes.io/part-of: flux
@@ -474,6 +611,8 @@ spec:
- jsonPath: .status.conditions[?(@.type=="Ready")].message - jsonPath: .status.conditions[?(@.type=="Ready")].message
name: Status name: Status
type: string type: string
deprecated: true
deprecationWarning: v1beta1 Provider is deprecated, upgrade to v1beta3
name: v1beta1 name: v1beta1
schema: schema:
openAPIV3Schema: openAPIV3Schema:
@@ -657,6 +796,8 @@ spec:
- jsonPath: .status.conditions[?(@.type=="Ready")].message - jsonPath: .status.conditions[?(@.type=="Ready")].message
name: Status name: Status
type: string type: string
deprecated: true
deprecationWarning: v1beta2 Provider is deprecated, upgrade to v1beta3
name: v1beta2 name: v1beta2
schema: schema:
openAPIV3Schema: openAPIV3Schema:
@@ -741,6 +882,7 @@ spec:
- github - github
- gitlab - gitlab
- gitea - gitea
- bitbucketserver
- bitbucket - bitbucket
- azuredevops - azuredevops
- googlechat - googlechat
@@ -851,9 +993,127 @@ spec:
type: object type: object
type: object type: object
served: true served: true
storage: true storage: false
subresources: subresources:
status: {} status: {}
- additionalPrinterColumns:
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1beta3
schema:
openAPIV3Schema:
description: Provider is the Schema for the providers API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ProviderSpec defines the desired state of the Provider.
properties:
address:
description: Address specifies the endpoint, in a generic sense, to
where alerts are sent. What kind of endpoint depends on the specific
Provider type being used. For the generic Provider, for example,
this is an HTTP/S address. For other Provider types this could be
a project ID or a namespace.
maxLength: 2048
type: string
certSecretRef:
description: "CertSecretRef specifies the Secret containing a PEM-encoded
CA certificate (in the `ca.crt` key). \n Note: Support for the `caFile`
key has been deprecated."
properties:
name:
description: Name of the referent.
type: string
required:
- name
type: object
channel:
description: Channel specifies the destination channel where events
should be posted.
maxLength: 2048
type: string
interval:
description: Interval at which to reconcile the Provider with its
Secret references. Deprecated and not used in v1beta3.
pattern: ^([0-9]+(\.[0-9]+)?(ms|s|m|h))+$
type: string
proxy:
description: Proxy the HTTP/S address of the proxy server.
maxLength: 2048
pattern: ^(http|https)://.*$
type: string
secretRef:
description: SecretRef specifies the Secret containing the authentication
credentials for this Provider.
properties:
name:
description: Name of the referent.
type: string
required:
- name
type: object
suspend:
description: Suspend tells the controller to suspend subsequent events
handling for this Provider.
type: boolean
timeout:
description: Timeout for sending alerts to the Provider.
pattern: ^([0-9]+(\.[0-9]+)?(ms|s|m))+$
type: string
type:
description: Type specifies which Provider implementation to use.
enum:
- slack
- discord
- msteams
- rocket
- generic
- generic-hmac
- github
- gitlab
- gitea
- bitbucketserver
- bitbucket
- azuredevops
- googlechat
- googlepubsub
- webex
- sentry
- azureeventhub
- telegram
- lark
- matrix
- opsgenie
- alertmanager
- grafana
- githubdispatch
- pagerduty
- datadog
- nats
type: string
username:
description: Username specifies the name under which events are posted.
maxLength: 2048
type: string
required:
- type
type: object
type: object
served: true
storage: true
subresources: {}
--- ---
apiVersion: apiextensions.k8s.io/v1 apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition kind: CustomResourceDefinition
@@ -864,6 +1124,7 @@ metadata:
{{- . | toYaml | nindent 4 }} {{- . | toYaml | nindent 4 }}
{{- end }} {{- end }}
labels: labels:
app.kubernetes.io/component: notification-controller
app.kubernetes.io/instance: '{{ .Release.Namespace }}' app.kubernetes.io/instance: '{{ .Release.Namespace }}'
app.kubernetes.io/managed-by: '{{ .Release.Service }}' app.kubernetes.io/managed-by: '{{ .Release.Service }}'
app.kubernetes.io/part-of: flux app.kubernetes.io/part-of: flux
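The chart now serves the v1beta3 Alert and Provider APIs, marks v1beta1/v1beta2 as deprecated, and moves storage to v1beta3. A hedged sketch of the new versions based on the schemas above; names, namespace, and the Slack channel are placeholders:

apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: general                # placeholder channel
  secretRef:
    name: slack-webhook           # placeholder Secret holding the webhook address/token
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: on-call
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: info
  eventSources:
    - kind: Kustomization
      name: '*'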

View File

@@ -341,6 +341,10 @@ spec:
to ensure efficient use of resources. to ensure efficient use of resources.
pattern: ^([0-9]+(\.[0-9]+)?(ms|s|m|h))+$ pattern: ^([0-9]+(\.[0-9]+)?(ms|s|m|h))+$
type: string type: string
prefix:
description: Prefix to use for server-side filtering of files in the
Bucket.
type: string
provider: provider:
default: generic default: generic
description: Provider of the object storage bucket. Defaults to 'generic', description: Provider of the object storage bucket. Defaults to 'generic',
@@ -2150,6 +2154,32 @@ spec:
Chart dependencies, which are not bundled in the umbrella chart Chart dependencies, which are not bundled in the umbrella chart
artifact, are not verified. artifact, are not verified.
properties: properties:
matchOIDCIdentity:
description: MatchOIDCIdentity specifies the identity matching
criteria to use while verifying an OCI artifact which was signed
using Cosign keyless signing. The artifact's identity is deemed
to be verified if any of the specified matchers match against
the identity.
items:
description: OIDCIdentityMatch specifies options for verifying
the certificate identity, i.e. the issuer and the subject
of the certificate.
properties:
issuer:
description: Issuer specifies the regex pattern to match
against to verify the OIDC issuer in the Fulcio certificate.
The pattern must be a valid Go regular expression.
type: string
subject:
description: Subject specifies the regex pattern to match
against to verify the identity subject in the Fulcio certificate.
The pattern must be a valid Go regular expression.
type: string
required:
- issuer
- subject
type: object
type: array
provider: provider:
default: cosign default: cosign
description: Provider specifies the technology used to sign the description: Provider specifies the technology used to sign the
@@ -2653,6 +2683,11 @@ spec:
required: required:
- name - name
type: object type: object
insecure:
description: Insecure allows connecting to a non-TLS HTTP container
registry. This field is only taken into account if the .spec.type
field is set to 'oci'.
type: boolean
interval: interval:
description: Interval at which the HelmRepository URL is checked for description: Interval at which the HelmRepository URL is checked for
updates. This interval is approximate and may be subject to jitter updates. This interval is approximate and may be subject to jitter
@@ -2697,10 +2732,10 @@ spec:
of this HelmRepository. of this HelmRepository.
type: boolean type: boolean
timeout: timeout:
default: 60s
description: Timeout is used for the index fetch operation for an description: Timeout is used for the index fetch operation for an
HTTPS helm repository, and for remote OCI Repository operations HTTPS helm repository, and for remote OCI Repository operations
like pulling for an OCI helm repository. Its default value is 60s. like pulling for an OCI helm chart by the associated HelmChart.
Its default value is 60s.
pattern: ^([0-9]+(\.[0-9]+)?(ms|s|m))+$ pattern: ^([0-9]+(\.[0-9]+)?(ms|s|m))+$
type: string type: string
type: type:
@@ -2713,9 +2748,9 @@ spec:
url: url:
description: URL of the Helm repository, a valid URL contains at least description: URL of the Helm repository, a valid URL contains at least
a protocol and host. a protocol and host.
pattern: ^(http|https|oci)://.*$
type: string type: string
required: required:
- interval
- url - url
type: object type: object
status: status:
@@ -3033,6 +3068,32 @@ spec:
public keys used to verify the signature and specifies which provider public keys used to verify the signature and specifies which provider
to use to check whether OCI image is authentic. to use to check whether OCI image is authentic.
properties: properties:
matchOIDCIdentity:
description: MatchOIDCIdentity specifies the identity matching
criteria to use while verifying an OCI artifact which was signed
using Cosign keyless signing. The artifact's identity is deemed
to be verified if any of the specified matchers match against
the identity.
items:
description: OIDCIdentityMatch specifies options for verifying
the certificate identity, i.e. the issuer and the subject
of the certificate.
properties:
issuer:
description: Issuer specifies the regex pattern to match
against to verify the OIDC issuer in the Fulcio certificate.
The pattern must be a valid Go regular expression.
type: string
subject:
description: Subject specifies the regex pattern to match
against to verify the identity subject in the Fulcio certificate.
The pattern must be a valid Go regular expression.
type: string
required:
- issuer
- subject
type: object
type: array
provider: provider:
default: cosign default: cosign
description: Provider specifies the technology used to sign the description: Provider specifies the technology used to sign the
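On the source-controller side, HelmRepository gains an insecure flag for plain-HTTP OCI registries (interval is no longer required, and the 60s timeout default is now only documented), and Cosign verification learns matchOIDCIdentity for keyless signatures. A minimal sketch of both; URLs, names, and the issuer/subject patterns are placeholders:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: local-charts              # placeholder
  namespace: flux-system
spec:
  type: oci
  url: oci://registry.local:5000/charts   # placeholder plain-HTTP OCI registry
  insecure: true                  # only honoured when .spec.type is 'oci'
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: podinfo-manifests         # placeholder
  namespace: flux-system
spec:
  url: oci://ghcr.io/stefanprodan/manifests/podinfo   # placeholder artifact
  interval: 10m
  verify:
    provider: cosign
    matchOIDCIdentity:            # new: constrain keyless signatures by certificate identity
      - issuer: '^https://token\.actions\.githubusercontent\.com$'
        subject: '^https://github\.com/stefanprodan/podinfo.*$'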

View File

@@ -15,11 +15,7 @@ metadata:
{{- end }} {{- end }}
name: source-controller name: source-controller
spec: spec:
{{- if kindIs "invalid" .Values.sourceController.replicas }}
replicas: 1 replicas: 1
{{- else }}
replicas: {{ .Values.sourceController.replicas }}
{{- end}}
selector: selector:
matchLabels: matchLabels:
app: source-controller app: source-controller

View File

@@ -23,7 +23,7 @@ clusterDomain: cluster.local
cli: cli:
image: ghcr.io/fluxcd/flux-cli image: ghcr.io/fluxcd/flux-cli
tag: v2.1.2 tag: v2.2.3
nodeSelector: {} nodeSelector: {}
affinity: {} affinity: {}
tolerations: [] tolerations: []
@@ -36,7 +36,7 @@ cli:
helmController: helmController:
create: true create: true
image: ghcr.io/fluxcd/helm-controller image: ghcr.io/fluxcd/helm-controller
tag: v0.36.2 tag: v0.37.4
resources: resources:
limits: {} limits: {}
# cpu: 1000m # cpu: 1000m
@@ -84,7 +84,7 @@ helmController:
imageAutomationController: imageAutomationController:
create: true create: true
image: ghcr.io/fluxcd/image-automation-controller image: ghcr.io/fluxcd/image-automation-controller
tag: v0.36.1 tag: v0.37.1
resources: resources:
limits: {} limits: {}
# cpu: 1000m # cpu: 1000m
@@ -112,7 +112,7 @@ imageAutomationController:
imageReflectionController: imageReflectionController:
create: true create: true
image: ghcr.io/fluxcd/image-reflector-controller image: ghcr.io/fluxcd/image-reflector-controller
tag: v0.30.0 tag: v0.31.2
resources: resources:
limits: {} limits: {}
# cpu: 1000m # cpu: 1000m
@@ -140,7 +140,7 @@ imageReflectionController:
kustomizeController: kustomizeController:
create: true create: true
image: ghcr.io/fluxcd/kustomize-controller image: ghcr.io/fluxcd/kustomize-controller
tag: v1.1.1 tag: v1.2.2
resources: resources:
limits: {} limits: {}
# cpu: 1000m # cpu: 1000m
@@ -188,7 +188,7 @@ kustomizeController:
notificationController: notificationController:
create: true create: true
image: ghcr.io/fluxcd/notification-controller image: ghcr.io/fluxcd/notification-controller
tag: v1.1.0 tag: v1.2.4
resources: resources:
limits: {} limits: {}
# cpu: 1000m # cpu: 1000m
@@ -241,7 +241,7 @@ notificationController:
sourceController: sourceController:
create: true create: true
image: ghcr.io/fluxcd/source-controller image: ghcr.io/fluxcd/source-controller
tag: v1.1.2 tag: v1.2.4
resources: resources:
limits: {} limits: {}
# cpu: 1000m # cpu: 1000m
@@ -278,6 +278,8 @@ rbac:
createAggregation: true createAggregation: true
# -- Add annotations to all RBAC resources, e.g. "helm.sh/resource-policy": keep # -- Add annotations to all RBAC resources, e.g. "helm.sh/resource-policy": keep
annotations: {} annotations: {}
roleRef:
name: cluster-admin
logLevel: info logLevel: info
watchAllNamespaces: true watchAllNamespaces: true
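The kustomize-controller ClusterRoleBinding no longer hard-codes cluster-admin; the role name is read from rbac.roleRef.name, which defaults to cluster-admin as shown above. A hedged sketch of overriding it through the chart's values; the narrower ClusterRole named here is a placeholder you would have to create yourself:

rbac:
  roleRef:
    name: flux-restricted   # placeholder ClusterRole; the chart default stays cluster-admin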

View File

@@ -1,2 +1,2 @@
name: cozy-installer name: cozy-installer
version: 1.0.0 version: 0.2.0

View File

@@ -1,9 +1,9 @@
NAMESPACE=cozy-installer NAMESPACE=cozy-system
NAME=installer NAME=installer
PUSH := 1 PUSH := 1
LOAD := 0 LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.1.0 TAG := v0.2.0
TALOS_VERSION=$(shell awk '/^version:/ {print $$2}' images/talos/profiles/installer.yaml) TALOS_VERSION=$(shell awk '/^version:/ {print $$2}' images/talos/profiles/installer.yaml)
show: show:
@@ -21,6 +21,7 @@ update:
image: image-cozystack image-talos image-matchbox image: image-cozystack image-talos image-matchbox
image-cozystack: image-cozystack:
make -C ../../.. repos
docker buildx build -f images/cozystack/Dockerfile ../../.. \ docker buildx build -f images/cozystack/Dockerfile ../../.. \
--provenance false \ --provenance false \
--tag $(REGISTRY)/cozystack:$(TAG) \ --tag $(REGISTRY)/cozystack:$(TAG) \

View File

@@ -82,6 +82,9 @@ spec:
- key: "node.kubernetes.io/not-ready" - key: "node.kubernetes.io/not-ready"
operator: "Exists" operator: "Exists"
effect: "NoSchedule" effect: "NoSchedule"
- key: "node.cilium.io/agent-not-ready"
operator: "Exists"
effect: "NoSchedule"
--- ---
apiVersion: v1 apiVersion: v1
kind: Service kind: Service

View File

@@ -1,2 +1,2 @@
name: cozy-platform name: cozy-platform
version: 1.0.0 version: 0.2.0

View File

@@ -13,7 +13,7 @@ namespaces-show:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml
namespaces-apply: namespaces-apply:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl apply -f- helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl apply -n $(NAMESPACE) -f-
diff: diff:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl diff -f- helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) | kubectl diff -f-

View File

@@ -0,0 +1,102 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cilium
releaseName: cilium
chart: cozy-cilium
namespace: cozy-cilium
privileged: true
dependsOn: []
values:
cilium:
bpf:
masquerade: true
cni:
chainingMode: ~
customConf: false
configMap: ""
enableIPv4Masquerade: true
enableIdentityMark: true
ipv4NativeRoutingCIDR: "{{ index $cozyConfig.data "ipv4-pod-cidr" }}"
autoDirectNodeRoutes: true
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: [cilium]
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cilium,cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cilium,cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [cilium,victoria-metrics-operator]
- name: metallb
releaseName: metallb
chart: cozy-metallb
namespace: cozy-metallb
privileged: true
dependsOn: [cilium]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: [cilium]
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cilium,cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cilium,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: [cilium]
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: [cilium]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cilium,cert-manager]
- name: linstor
releaseName: linstor
chart: cozy-linstor
namespace: cozy-linstor
privileged: true
dependsOn: [piraeus-operator,cilium,cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []

View File

@@ -0,0 +1,171 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cilium
releaseName: cilium
chart: cozy-cilium
namespace: cozy-cilium
privileged: true
dependsOn: []
- name: kubeovn
releaseName: kubeovn
chart: cozy-kubeovn
namespace: cozy-kubeovn
privileged: true
dependsOn: [cilium]
values:
cozystack:
nodesHash: {{ include "cozystack.master-node-ips" . | sha256sum }}
kube-ovn:
ipv4:
POD_CIDR: "{{ index $cozyConfig.data "ipv4-pod-cidr" }}"
POD_GATEWAY: "{{ index $cozyConfig.data "ipv4-pod-gateway" }}"
SVC_CIDR: "{{ index $cozyConfig.data "ipv4-svc-cidr" }}"
JOIN_CIDR: "{{ index $cozyConfig.data "ipv4-join-cidr" }}"
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: [cilium,kubeovn]
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cilium,kubeovn,cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [cilium,kubeovn,victoria-metrics-operator]
- name: kubevirt-operator
releaseName: kubevirt-operator
chart: cozy-kubevirt-operator
namespace: cozy-kubevirt
dependsOn: [cilium,kubeovn]
- name: kubevirt
releaseName: kubevirt
chart: cozy-kubevirt
namespace: cozy-kubevirt
privileged: true
dependsOn: [cilium,kubeovn,kubevirt-operator]
- name: kubevirt-cdi-operator
releaseName: kubevirt-cdi-operator
chart: cozy-kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
dependsOn: [cilium,kubeovn]
- name: kubevirt-cdi
releaseName: kubevirt-cdi
chart: cozy-kubevirt-cdi
namespace: cozy-kubevirt-cdi
dependsOn: [cilium,kubeovn,kubevirt-cdi-operator]
- name: metallb
releaseName: metallb
chart: cozy-metallb
namespace: cozy-metallb
privileged: true
dependsOn: [cilium,kubeovn]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: [cilium,kubeovn]
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cilium,kubeovn,cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: [cilium,kubeovn]
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: [cilium,kubeovn]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cilium,kubeovn,cert-manager]
- name: linstor
releaseName: linstor
chart: cozy-linstor
namespace: cozy-linstor
privileged: true
dependsOn: [piraeus-operator,cilium,kubeovn,cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: [cilium,kubeovn]
- name: dashboard
releaseName: dashboard
chart: cozy-dashboard
namespace: cozy-dashboard
dependsOn: [cilium,kubeovn]
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
- name: kamaji
releaseName: kamaji
chart: cozy-kamaji
namespace: cozy-kamaji
dependsOn: [cilium,kubeovn,cert-manager]
- name: capi-operator
releaseName: capi-operator
chart: cozy-capi-operator
namespace: cozy-cluster-api
privileged: true
dependsOn: [cilium,kubeovn,cert-manager]
- name: capi-providers
releaseName: capi-providers
chart: cozy-capi-providers
namespace: cozy-cluster-api
privileged: true
dependsOn: [cilium,kubeovn,capi-operator]

View File

@@ -0,0 +1,63 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: []
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [victoria-metrics-operator]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: []
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: []
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: []
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []

View File

@@ -0,0 +1,89 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: []
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [victoria-metrics-operator]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: []
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: []
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: []
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []
- name: dashboard
releaseName: dashboard
chart: cozy-dashboard
namespace: cozy-dashboard
dependsOn: []
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,7 +1,7 @@
{{/* {{/*
Get IP-addresses of master nodes Get IP-addresses of master nodes
*/}} */}}
{{- define "master.nodeIPs" -}} {{- define "cozystack.master-node-ips" -}}
{{- $nodes := lookup "v1" "Node" "" "" -}} {{- $nodes := lookup "v1" "Node" "" "" -}}
{{- $ips := list -}} {{- $ips := list -}}
{{- range $node := $nodes.items -}} {{- range $node := $nodes.items -}}

View File

@@ -1,7 +1,10 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{- $host := "example.org" }} {{- $host := "example.org" }}
{{- $tenantRoot := list }} {{- $tenantRoot := list }}
{{- if .Capabilities.APIVersions.Has "helm.toolkit.fluxcd.io/v2beta1" }} {{- if .Capabilities.APIVersions.Has "helm.toolkit.fluxcd.io/v2beta2" }}
{{- $tenantRoot = lookup "helm.toolkit.fluxcd.io/v2beta1" "HelmRelease" "tenant-root" "tenant-root" }} {{- $tenantRoot = lookup "helm.toolkit.fluxcd.io/v2beta2" "HelmRelease" "tenant-root" "tenant-root" }}
{{- end }} {{- end }}
{{- if and $tenantRoot $tenantRoot.spec $tenantRoot.spec.values $tenantRoot.spec.values.host }} {{- if and $tenantRoot $tenantRoot.spec $tenantRoot.spec.values $tenantRoot.spec.values.host }}
{{- $host = $tenantRoot.spec.values.host }} {{- $host = $tenantRoot.spec.values.host }}
@@ -19,7 +22,7 @@ metadata:
namespace.cozystack.io/host: "{{ $host }}" namespace.cozystack.io/host: "{{ $host }}"
name: tenant-root name: tenant-root
--- ---
apiVersion: helm.toolkit.fluxcd.io/v2beta1 apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease kind: HelmRelease
metadata: metadata:
name: tenant-root name: tenant-root
@@ -45,7 +48,9 @@ spec:
values: values:
host: "{{ $host }}" host: "{{ $host }}"
dependsOn: dependsOn:
- name: cilium {{- range $x := $bundle.releases }}
namespace: cozy-cilium {{- if has $x.name (list "cilium" "kubeovn") }}
- name: kubeovn - name: {{ $x.name }}
namespace: cozy-kubeovn namespace: {{ $x.namespace }}
{{- end }}
{{- end }}
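These templates resolve the active bundle and per-release overrides from the cozystack ConfigMap in cozy-system. A hedged sketch of such a ConfigMap, combining the keys read across this diff (bundle-name, bundle-disable, the ipv4-* CIDRs, and the values-<release> overrides described in the bundles commit); every value below is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  bundle-name: "paas-full"             # selects bundles/<bundle-name>.yaml
  bundle-disable: "linstor,metallb"    # optional, comma-separated releases to skip
  ipv4-pod-cidr: "10.244.0.0/16"
  ipv4-pod-gateway: "10.244.0.1"
  ipv4-svc-cidr: "10.96.0.0/16"
  ipv4-join-cidr: "100.64.0.0/16"
  values-cilium: |                     # optional per-release values override (YAML or JSON)
    cilium:
      bpf:
        masquerade: false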

View File

@@ -1,38 +1,27 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1 {{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
kind: HelmRelease {{- $bundleName := index $cozyConfig.data "bundle-name" }}
metadata: {{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
name: cilium {{- $dependencyNamespaces := dict }}
namespace: cozy-cilium {{- $disabledComponents := splitList "," ((index $cozyConfig.data "bundle-disable") | default "") }}
labels:
cozystack.io/repository: system {{/* collect dependency namespaces from releases */}}
spec: {{- range $x := $bundle.releases }}
interval: 1m {{- $_ := set $dependencyNamespaces $x.name $x.namespace }}
releaseName: cilium {{- end }}
install:
remediation: {{- range $x := $bundle.releases }}
retries: -1 {{- if not (has $x.name $disabledComponents) }}
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cilium
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
--- ---
apiVersion: helm.toolkit.fluxcd.io/v2beta1 apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease kind: HelmRelease
metadata: metadata:
name: kubeovn name: {{ $x.name }}
namespace: cozy-kubeovn namespace: {{ $x.namespace }}
labels: labels:
cozystack.io/repository: system cozystack.io/repository: system
spec: spec:
interval: 1m interval: 1m
releaseName: kubeovn releaseName: {{ $x.releaseName | default $x.name }}
install: install:
remediation: remediation:
retries: -1 retries: -1
@@ -41,718 +30,31 @@ spec:
retries: -1 retries: -1
chart: chart:
spec: spec:
chart: cozy-kubeovn chart: {{ $x.chart }}
reconcileStrategy: Revision reconcileStrategy: Revision
sourceRef: sourceRef:
kind: HelmRepository kind: HelmRepository
name: cozystack-system name: cozystack-system
namespace: cozy-system namespace: cozy-system
{{- $values := dict }}
{{- with $x.values }}
{{- $values = merge . $values }}
{{- end }}
{{- with index $cozyConfig.data (printf "values-%s" $x.name) }}
{{- $values = merge (fromYaml .) $values }}
{{- end }}
{{- with $values }}
values: values:
cozystack: {{- toYaml . | nindent 4}}
configHash: {{ index (lookup "v1" "ConfigMap" "cozy-system" "cozystack") "data" | toJson | sha256sum }} {{- end }}
nodesHash: {{ include "master.nodeIPs" . | sha256sum }} {{- with $x.dependsOn }}
dependsOn: dependsOn:
- name: cilium {{- range $dep := . }}
namespace: cozy-cilium {{- if not (has $dep $disabledComponents) }}
--- - name: {{ $dep }}
apiVersion: helm.toolkit.fluxcd.io/v2beta1 namespace: {{ index $dependencyNamespaces $dep }}
kind: HelmRelease {{- end }}
metadata:
name: cozy-fluxcd
namespace: cozy-fluxcd
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: fluxcd
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-fluxcd
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cert-manager
namespace: cozy-cert-manager
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cert-manager
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cert-manager
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cert-manager-issuers
namespace: cozy-cert-manager
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cert-manager-issuers
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cert-manager-issuers
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: victoria-metrics-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-victoria-metrics-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: monitoring
namespace: cozy-monitoring
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: monitoring
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-monitoring
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-operator
namespace: cozy-kubevirt
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt
namespace: cozy-kubevirt
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: kubevirt-operator
namespace: cozy-kubevirt
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-cdi-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-cdi-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-cdi
namespace: cozy-kubevirt-cdi
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-cdi
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-cdi
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: metallb
namespace: cozy-metallb
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: metallb
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-metallb
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: grafana-operator
namespace: cozy-grafana-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: grafana-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-grafana-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: mariadb-operator
namespace: cozy-mariadb-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: mariadb-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-mariadb-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
- name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: postgres-operator
namespace: cozy-postgres-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: postgres-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-postgres-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: rabbitmq-operator
namespace: cozy-rabbitmq-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: rabbitmq-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-rabbitmq-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: redis-operator
namespace: cozy-redis-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: redis-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-redis-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: piraeus-operator
namespace: cozy-linstor
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: piraeus-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-piraeus-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: linstor
namespace: cozy-linstor
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: linstor
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-linstor
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: piraeus-operator
namespace: cozy-linstor
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: telepresence
namespace: cozy-telepresence
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: traffic-manager
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-telepresence
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: dashboard
namespace: cozy-dashboard
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: dashboard
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-dashboard
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
          {{- end }}
          {{- end }}
  {{- end }}
  {{- end }}
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kamaji
namespace: cozy-kamaji
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kamaji
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kamaji
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: capi-operator
namespace: cozy-cluster-api
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: capi-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-capi-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: capi-providers
namespace: cozy-cluster-api
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: capi-providers
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-capi-providers
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: capi-operator
namespace: cozy-cluster-api
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
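A note on the templated values: block of the dashboard release above: it looks up every HelmRepository in the cozy-public namespace and writes its current artifact revision into the kubeapps redis pod annotations, presumably so those pods roll whenever a repository index changes. A rendered result might look roughly like the sketch below; the repository name and revision are purely illustrative, not taken from a real cluster:

    values:
      kubeapps:
        redis:
          master:
            podAnnotations:
              repository.cozystack.io/cozystack-apps: "sha256:0f1e2d3c4b5a"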

@@ -1,13 +1,31 @@
-{{- range $ns := .Values.namespaces }}
+{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
+{{- $bundleName := index $cozyConfig.data "bundle-name" }}
+{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
+{{- $namespaces := dict }}
+{{/* collect namespaces from releases */}}
+{{- range $x := $bundle.releases }}
+{{- if not (hasKey $namespaces $x.namespace) }}
+{{- $_ := set $namespaces $x.namespace false }}
+{{- end }}
+{{/* if at least one release requires a privileged namespace, then it should be privileged */}}
+{{- if or $x.privileged (index $namespaces $x.namespace) }}
+{{- $_ := set $namespaces $x.namespace true }}
+{{- end }}
+{{- end }}
+{{- $_ := set $namespaces "cozy-fluxcd" false }}
+{{- range $namespace, $privileged := $namespaces }}
 ---
 apiVersion: v1
 kind: Namespace
 metadata:
   annotations:
     "helm.sh/resource-policy": keep
-  {{- if $ns.privileged }}
+  {{- if $privileged }}
   labels:
     pod-security.kubernetes.io/enforce: privileged
   {{- end }}
-  name: {{ $ns.name }}
+  name: {{ $namespace }}
 {{- end }}
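To illustrate the new logic with a made-up bundle (the field names follow the template above; every value is hypothetical): the release list below would render namespaces cozy-foo (privileged, because at least one of its releases asks for it), cozy-bar, and the always-added unprivileged cozy-fluxcd.

    # hypothetical bundles/example.yaml
    releases:
      - name: foo-operator
        namespace: cozy-foo
        privileged: true
      - name: foo
        namespace: cozy-foo
      - name: bar
        namespace: cozy-bar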

@@ -1,30 +0,0 @@
-namespaces:
-  - name: cozy-public
-  - name: cozy-system
-    privileged: true
-  - name: cozy-cert-manager
-  - name: cozy-cilium
-    privileged: true
-  - name: cozy-fluxcd
-  - name: cozy-grafana-operator
-  - name: cozy-kamaji
-  - name: cozy-cluster-api
-    privileged: true # for capk only
-  - name: cozy-dashboard
-  - name: cozy-kubeovn
-    privileged: true
-  - name: cozy-kubevirt
-    privileged: true
-  - name: cozy-kubevirt-cdi
-  - name: cozy-linstor
-    privileged: true
-  - name: cozy-mariadb-operator
-  - name: cozy-metallb
-    privileged: true
-  - name: cozy-monitoring
-    privileged: true
-  - name: cozy-postgres-operator
-  - name: cozy-rabbitmq-operator
-  - name: cozy-redis-operator
-  - name: cozy-telepresence
-  - name: cozy-victoria-metrics-operator

@@ -1,4 +1,5 @@
 OUT=../../_out/repos/system
+VERSION := 0.2.0
 gen: fix-chartnames
@@ -9,4 +10,4 @@ repo: fix-chartnames
 	cd "$(OUT)" && helm repo index .
 fix-chartnames:
-	find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do printf "name: cozy-%s\nversion: 1.0.0\n" "$$i" > "$$i/Chart.yaml"; done
+	find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do printf "name: cozy-%s\nversion: $(VERSION)\n" "$$i" > "$$i/Chart.yaml"; done
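With the new variable, the fix-chartnames rule stamps every chart it finds with the repository version instead of the hard-coded 1.0.0; for a chart directory such as cilium the regenerated Chart.yaml is simply the two lines below (matching the Chart.yaml diffs that follow):

    name: cozy-cilium
    version: 0.2.0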

@@ -1,2 +1,2 @@
 name: cozy-capi-operator
-version: 1.0.0
+version: 0.2.0

@@ -1,2 +1,2 @@
 name: cozy-capi-providers
-version: 1.0.0
+version: 0.2.0

@@ -1,2 +1,2 @@
 name: cozy-cert-manager-issuers
-version: 1.0.0
+version: 0.2.0

@@ -1,2 +1,2 @@
 name: cozy-cert-manager
-version: 1.0.0
+version: 0.2.0

@@ -1,2 +1,2 @@
 name: cozy-cilium
-version: 1.0.0
+version: 0.2.0

@@ -2,18 +2,18 @@ NAMESPACE=cozy-cilium
 NAME=cilium
 show:
-	helm template --dry-run=server -n $(NAMESPACE) $(NAME) .
+	kubectl get hr -n cozy-cilium cilium -o jsonpath='{.spec.values}' | helm template --dry-run=server -n $(NAMESPACE) $(NAME) . -f -
 apply:
-	helm upgrade -i -n $(NAMESPACE) $(NAME) .
+	kubectl get hr -n cozy-cilium cilium -o jsonpath='{.spec.values}' | helm upgrade -i -n $(NAMESPACE) $(NAME) . -f -
 diff:
-	helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) .
+	kubectl get hr -n cozy-cilium cilium -o jsonpath='{.spec.values}' | helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) . -f -
 update:
 	rm -rf charts
 	helm repo add cilium https://helm.cilium.io/
 	helm repo update cilium
-	helm pull cilium/cilium --untar --untardir charts
+	helm pull cilium/cilium --untar --untardir charts --version 1.14
 	sed -i -e '/Used in iptables/d' -e '/SYS_MODULE/d' charts/cilium/values.yaml
-	patch -p3 < patches/fix-cgroups.patch
+	patch -p3 --no-backup-if-mismatch < patches/fix-cgroups.patch
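The reworked show/apply/diff targets render the chart with the same values Flux applies in-cluster: the jsonpath query prints spec.values of the cilium HelmRelease and helm reads that output from stdin as a values file (-f -). As a rough sketch, with placeholder values rather than the actual cozystack settings, a HelmRelease such as

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: cilium
      namespace: cozy-cilium
    spec:
      values:
        ipam:
          mode: kubernetes

would feed something like {"ipam":{"mode":"kubernetes"}} into the local helm run.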

@@ -122,7 +122,7 @@ annotations:
     description: |
       CiliumPodIPPool defines an IP pool that can be used for pooled IPAM (i.e. the multi-pool IPAM mode).
 apiVersion: v2
-appVersion: 1.14.5
+appVersion: 1.14.9
 description: eBPF-based Networking, Security, and Observability
 home: https://cilium.io/
 icon: https://cdn.jsdelivr.net/gh/cilium/cilium@v1.14/Documentation/images/logo-solo.svg
@@ -138,4 +138,4 @@ kubeVersion: '>= 1.16.0-0'
 name: cilium
 sources:
 - https://github.com/cilium/cilium
-version: 1.14.5
+version: 1.14.9

@@ -1,6 +1,6 @@
 # cilium
-![Version: 1.14.5](https://img.shields.io/badge/Version-1.14.5-informational?style=flat-square) ![AppVersion: 1.14.5](https://img.shields.io/badge/AppVersion-1.14.5-informational?style=flat-square)
+![Version: 1.14.9](https://img.shields.io/badge/Version-1.14.9-informational?style=flat-square) ![AppVersion: 1.14.9](https://img.shields.io/badge/AppVersion-1.14.9-informational?style=flat-square)
 Cilium is open source software for providing and transparently securing
 network connectivity and loadbalancing between application workloads such as
@@ -76,7 +76,7 @@ contributors across the globe, there is almost always someone available to help.
 | authentication.mutual.spire.install.agent.securityContext | object | `{}` | Security context to be added to spire agent containers. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container |
 | authentication.mutual.spire.install.agent.serviceAccount | object | `{"create":true,"name":"spire-agent"}` | SPIRE agent service account |
 | authentication.mutual.spire.install.agent.skipKubeletVerification | bool | `true` | SPIRE Workload Attestor kubelet verification. |
-| authentication.mutual.spire.install.agent.tolerations | list | `[]` | SPIRE agent tolerations configuration ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
+| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
 | authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
 | authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
 | authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
@@ -155,12 +155,12 @@ contributors across the globe, there is almost always someone available to help.
 | clustermesh.apiserver.extraEnv | list | `[]` | Additional clustermesh-apiserver environment variables. |
 | clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
 | clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
-| clustermesh.apiserver.image | object | `{"digest":"sha256:7eaa35cf5452c43b1f7d0cde0d707823ae7e49965bcb54c053e31ea4e04c3d96","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.14.5","useDigest":true}` | Clustermesh API server image. |
+| clustermesh.apiserver.image | object | `{"digest":"sha256:5c16f8b8e22ce41e11998e70846fbcecea3a6b683a38253809ead8d871f6d8a3","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.14.9","useDigest":true}` | Clustermesh API server image. |
 | clustermesh.apiserver.kvstoremesh.enabled | bool | `false` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
 | clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
 | clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
 | clustermesh.apiserver.kvstoremesh.extraVolumeMounts | list | `[]` | Additional KVStoreMesh volumeMounts. |
-| clustermesh.apiserver.kvstoremesh.image | object | `{"digest":"sha256:d7137edd0efa2b1407b20088af3980a9993bb616d85bf9b55ea2891d1b99023a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/kvstoremesh","tag":"v1.14.5","useDigest":true}` | KVStoreMesh image. |
+| clustermesh.apiserver.kvstoremesh.image | object | `{"digest":"sha256:9d9efb25806660f3663b9cd803fb8679f2b115763470002a9770e2c1eb1e5b22","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/kvstoremesh","tag":"v1.14.9","useDigest":true}` | KVStoreMesh image. |
 | clustermesh.apiserver.kvstoremesh.resources | object | `{}` | Resource requests and limits for the KVStoreMesh container |
 | clustermesh.apiserver.kvstoremesh.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}` | KVStoreMesh Security context |
 | clustermesh.apiserver.metrics.enabled | bool | `true` | Enables exporting apiserver metrics in OpenMetrics format. |
@@ -300,7 +300,7 @@ contributors across the globe, there is almost always someone available to help.
 | eni.subnetIDsFilter | list | `[]` | Filter via subnet IDs which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. |
 | eni.subnetTagsFilter | list | `[]` | Filter via tags (k=v) which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. |
 | eni.updateEC2AdapterLimitViaAPI | bool | `true` | Update ENI Adapter limits from the EC2 API |
-| envoy.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium-envoy"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-envoy. |
+| envoy.affinity | object | `{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"cilium.io/no-schedule","operator":"NotIn","values":["true"]}]}]}},"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]},"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium-envoy"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-envoy. |
 | envoy.connectTimeoutSeconds | int | `2` | Time in seconds after which a TCP connection attempt times out |
 | envoy.dnsPolicy | string | `nil` | DNS policy for Cilium envoy pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
 | envoy.enabled | bool | `false` | Enable Envoy Proxy in standalone DaemonSet. |
@@ -312,7 +312,7 @@ contributors across the globe, there is almost always someone available to help.
 | envoy.extraVolumes | list | `[]` | Additional envoy volumes. |
 | envoy.healthPort | int | `9878` | TCP port for the health API. |
 | envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
-| envoy.image | object | `{"digest":"sha256:992998398dadfff7117bfa9fdb7c9474fefab7f0237263f7c8114e106c67baca","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.26.6-ad82c7c56e88989992fd25d8d67747de865c823b","useDigest":true}` | Envoy container image. |
+| envoy.image | object | `{"digest":"sha256:39b75548447978230dedcf25da8940e4d3540c741045ef391a8e74dbb9661a86","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.26.7-bbde4095997ea57ead209f56158790d47224a0f5","useDigest":true}` | Envoy container image. |
 | envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
 | envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
 | envoy.log.format | string | `"[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"` | The format string to use for laying out the log message metadata of Envoy. |
@@ -324,14 +324,15 @@ contributors across the globe, there is almost always someone available to help.
 | envoy.podLabels | object | `{}` | Labels to be added to envoy pods |
 | envoy.podSecurityContext | object | `{}` | Security Context for cilium-envoy pods. |
 | envoy.priorityClassName | string | `nil` | The priority class to use for cilium-envoy. |
+| envoy.prometheus | object | `{"enabled":true,"port":"9964","serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Configure Cilium Envoy Prometheus options. Note that some of these apply to either cilium-agent or cilium-envoy. |
 | envoy.prometheus.enabled | bool | `true` | Enable prometheus metrics for cilium-envoy |
 | envoy.prometheus.port | string | `"9964"` | Serve prometheus metrics for cilium-envoy on the configured port |
 | envoy.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-envoy |
-| envoy.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
+| envoy.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) Note that this setting applies to both cilium-envoy _and_ cilium-agent with Envoy enabled. |
 | envoy.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
 | envoy.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-envoy |
-| envoy.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-envoy |
+| envoy.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-envoy or for cilium-agent with Envoy configured. |
-| envoy.prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-envoy |
+| envoy.prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-envoy or for cilium-agent with Envoy configured. |
 | envoy.readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe |
 | envoy.readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe |
 | envoy.resources | object | `{}` | Envoy resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
@@ -418,7 +419,7 @@ contributors across the globe, there is almost always someone available to help.
 | hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
 | hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
 | hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
-| hubble.relay.image | object | `{"digest":"sha256:dbef89f924a927043d02b40c18e417c1ea0e8f58b44523b80fef7e3652db24d4","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.14.5","useDigest":true}` | Hubble-relay container image. |
+| hubble.relay.image | object | `{"digest":"sha256:f506f3c6e0a979437cde79eb781654fda4f10ddb5642cebc4dc81254cfb7eeaa","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.14.9","useDigest":true}` | Hubble-relay container image. |
 | hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
 | hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
 | hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -475,7 +476,7 @@ contributors across the globe, there is almost always someone available to help.
 | hubble.ui.backend.extraEnv | list | `[]` | Additional hubble-ui backend environment variables. |
 | hubble.ui.backend.extraVolumeMounts | list | `[]` | Additional hubble-ui backend volumeMounts. |
 | hubble.ui.backend.extraVolumes | list | `[]` | Additional hubble-ui backend volumes. |
-| hubble.ui.backend.image | object | `{"digest":"sha256:1f86f3400827a0451e6332262467f894eeb7caf0eb8779bd951e2caa9d027cbe","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.12.1","useDigest":true}` | Hubble-ui backend image. |
+| hubble.ui.backend.image | object | `{"digest":"sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.13.0","useDigest":true}` | Hubble-ui backend image. |
 | hubble.ui.backend.resources | object | `{}` | Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment. |
 | hubble.ui.backend.securityContext | object | `{}` | Hubble-ui backend security context. |
 | hubble.ui.baseUrl | string | `"/"` | Defines base url prefix for all hubble-ui http requests. It needs to be changed in case if ingress for hubble-ui is configured under some sub-path. Trailing `/` is required for custom path, ex. `/service-map/` |
@@ -483,7 +484,7 @@ contributors across the globe, there is almost always someone available to help.
 | hubble.ui.frontend.extraEnv | list | `[]` | Additional hubble-ui frontend environment variables. |
 | hubble.ui.frontend.extraVolumeMounts | list | `[]` | Additional hubble-ui frontend volumeMounts. |
 | hubble.ui.frontend.extraVolumes | list | `[]` | Additional hubble-ui frontend volumes. |
-| hubble.ui.frontend.image | object | `{"digest":"sha256:9e5f81ee747866480ea1ac4630eb6975ff9227f9782b7c93919c081c33f38267","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.12.1","useDigest":true}` | Hubble-ui frontend image. |
+| hubble.ui.frontend.image | object | `{"digest":"sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.13.0","useDigest":true}` | Hubble-ui frontend image. |
 | hubble.ui.frontend.resources | object | `{}` | Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment. |
 | hubble.ui.frontend.securityContext | object | `{}` | Hubble-ui frontend security context. |
 | hubble.ui.frontend.server.ipv6 | object | `{"enabled":true}` | Controls server listener for ipv6 |
@@ -510,7 +511,7 @@ contributors across the globe, there is almost always someone available to help.
 | hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
 | identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd` or `kvstore`). |
 | identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
-| image | object | `{"digest":"sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.5","useDigest":true}` | Agent container image. |
+| image | object | `{"digest":"sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.9","useDigest":true}` | Agent container image. |
 | imagePullSecrets | string | `nil` | Configure image pull secrets for pulling container images |
 | ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
 | ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -618,7 +619,7 @@ contributors across the globe, there is almost always someone available to help.
 | operator.extraVolumes | list | `[]` | Additional cilium-operator volumes. |
 | operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
 | operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
-| operator.image | object | `{"alibabacloudDigest":"sha256:e0152c498ba73c56a82eee2a706c8f400e9a6999c665af31a935bdf08e659bc3","awsDigest":"sha256:785ccf1267d0ed3ba9e4bd8166577cb4f9e4ce996af26b27c9d5c554a0d5b09a","azureDigest":"sha256:9203f5583aa34e716d7a6588ebd144e43ce3b77873f578fc12b2679e33591353","genericDigest":"sha256:303f9076bdc73b3fc32aaedee64a14f6f44c8bb08ee9e3956d443021103ebe7a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.14.5","useDigest":true}` | cilium-operator image. |
+| operator.image | object | `{"alibabacloudDigest":"sha256:765314779093b54750f83280f009229f20fe1f28466a633d9bb4143d2ad669c5","awsDigest":"sha256:041ad5b49ae63ba0f1974e1a1d9ebf9f52541cd2813088fa687f9d544125a1ec","azureDigest":"sha256:2d3b9d868eb03fa9256d34192a734a2abab283f527a9c97b7cefcd3401649d17","genericDigest":"sha256:1552d653870dd8ebbd16ee985a5497dd78a2097370978b0cfbd2da2072f30712","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.14.9","useDigest":true}` | cilium-operator image. |
 | operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
 | operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
 | operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -665,7 +666,7 @@ contributors across the globe, there is almost always someone available to help.
 | preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
 | preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
 | preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
-| preflight.image | object | `{"digest":"sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.5","useDigest":true}` | Cilium pre-flight image. |
+| preflight.image | object | `{"digest":"sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.9","useDigest":true}` | Cilium pre-flight image. |
 | preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
 | preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
 | preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |

@@ -11,9 +11,9 @@ set -o nounset
 # dependencies on anything that is part of the startup script
 # itself, and can be safely run multiple times per node (e.g. in
 # case of a restart).
-if [[ "$(iptables-save | grep -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
+if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
 then
     echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
-    iptables-save | grep -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
+    iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
 fi
 echo 'Done!'

@@ -100,7 +100,7 @@ then
 # Since that version containerd no longer allows missing configuration for the CNI,
 # not even for pods with hostNetwork set to true. Thus, we add a temporary one.
 # This will be replaced with the real config by the agent pod.
-echo -e "{\n\t"cniVersion": "0.3.1",\n\t"name": "cilium",\n\t"type": "cilium-cni"\n}" > /etc/cni/net.d/05-cilium.conf
+echo -e '{\n\t"cniVersion": "0.3.1",\n\t"name": "cilium",\n\t"type": "cilium-cni"\n}' > /etc/cni/net.d/05-cilium.conf
 fi
 # Start containerd. It won't create it's CNI configuration file anymore.

@@ -447,6 +447,9 @@ spec:
 volumeMounts:
 - name: tmp
   mountPath: /tmp
+{{- with .Values.extraVolumeMounts }}
+{{- toYaml . | nindent 8 }}
+{{- end }}
 terminationMessagePolicy: FallbackToLogsOnError
 {{- if .Values.cgroup.autoMount.enabled }}
 # Required to mount cgroup2 filesystem on the underlying Kubernetes node.

@@ -34,6 +34,20 @@ spec:
     metricRelabelings:
     {{- toYaml . | nindent 4 }}
     {{- end }}
+  {{- if .Values.envoy.prometheus.serviceMonitor.enabled }}
+  - port: envoy-metrics
+    interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }}
+    honorLabels: true
+    path: /metrics
+    {{- with .Values.envoy.prometheus.serviceMonitor.relabelings }}
+    relabelings:
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+    {{- with .Values.envoy.prometheus.serviceMonitor.metricRelabelings }}
+    metricRelabelings:
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+  {{- end }}
   targetLabels:
   - k8s-app
 {{- end }}
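The extra envoy-metrics endpoint above is only rendered when the Envoy ServiceMonitor is switched on, which, per the chart values shown further below, would look roughly like this in a values override:

    envoy:
      prometheus:
        serviceMonitor:
          enabled: true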

@@ -13,6 +13,7 @@
 {{- $fragmentTracking := "true" -}}
 {{- $defaultKubeProxyReplacement := "false" -}}
 {{- $azureUsePrimaryAddress := "true" -}}
+{{- $defaultDNSProxyEnableTransparentMode := "false" -}}
 {{- /* Default values when 1.8 was initially deployed */ -}}
 {{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}}
@@ -48,6 +49,7 @@
 {{- $azureUsePrimaryAddress = "false" -}}
 {{- end }}
 {{- $defaultKubeProxyReplacement = "disabled" -}}
+{{- $defaultDNSProxyEnableTransparentMode = "true" -}}
 {{- end -}}
 {{- /* Default values when 1.14 was initially deployed */ -}}
@@ -430,10 +432,16 @@ data:
 # - vxlan (default)
 # - geneve
 {{- if .Values.gke.enabled }}
+{{- if ne (.Values.routingMode | default "native") "native" }}
+{{- fail (printf "RoutingMode must be set to native when gke.enabled=true" )}}
+{{- end }}
 routing-mode: "native"
 enable-endpoint-routes: "true"
 enable-local-node-route: "false"
 {{- else if .Values.aksbyocni.enabled }}
+{{- if ne (.Values.routingMode | default "tunnel") "tunnel" }}
+{{- fail (printf "RoutingMode must be set to tunnel when aksbyocni.enabled=true" )}}
+{{- end }}
 routing-mode: "tunnel"
 tunnel-protocol: "vxlan"
 {{- else if .Values.routingMode }}
@@ -1092,6 +1100,13 @@ data:
 {{- end }}
 {{- if .Values.dnsProxy }}
+{{- if hasKey .Values.dnsProxy "enableTransparentMode" }}
+# explicit setting gets precedence
+dnsproxy-enable-transparent-mode: {{ .Values.dnsProxy.enableTransparentMode | quote }}
+{{- else if eq $cniChainingMode "none" }}
+# default DNS proxy to transparent mode in non-chaining modes
+dnsproxy-enable-transparent-mode: {{ $defaultDNSProxyEnableTransparentMode | quote }}
+{{- end }}
 {{- if .Values.dnsProxy.dnsRejectResponseCode }}
 tofqdns-dns-reject-response-code: {{ .Values.dnsProxy.dnsRejectResponseCode | quote }}
 {{- end }}
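In short: an explicitly set dnsProxy.enableTransparentMode always wins, and only when it is unset and CNI chaining is disabled does the version-dependent default apply. Pinning the behaviour from values would look like the following (key as documented in the values file further below):

    dnsProxy:
      enableTransparentMode: true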

@@ -82,7 +82,7 @@ spec:
 {{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.Version }}
         startupProbe:
           httpGet:
-            host: "localhost"
+            host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
             path: /healthz
             port: {{ .Values.envoy.healthPort }}
             scheme: HTTP
@@ -92,7 +92,7 @@
 {{- end }}
         livenessProbe:
           httpGet:
-            host: "localhost"
+            host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
             path: /healthz
             port: {{ .Values.envoy.healthPort }}
             scheme: HTTP
@@ -110,7 +110,7 @@
           timeoutSeconds: 5
         readinessProbe:
           httpGet:
-            host: "localhost"
+            host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
             path: /healthz
             port: {{ .Values.envoy.healthPort }}
             scheme: HTTP

@@ -7,6 +7,7 @@ metadata:
   namespace: {{ .Values.envoy.prometheus.serviceMonitor.namespace | default .Release.Namespace }}
   labels:
     app.kubernetes.io/part-of: cilium
+    app.kubernetes.io/name: cilium-envoy
     {{- with .Values.envoy.prometheus.serviceMonitor.labels }}
     {{- toYaml . | nindent 4 }}
     {{- end }}
@@ -22,7 +23,7 @@
     matchNames:
     - {{ .Release.Namespace }}
   endpoints:
-  - port: metrics
+  - port: envoy-metrics
     interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }}
     honorLabels: true
     path: /metrics

@@ -66,8 +66,13 @@ spec:
 - /tmp/ready
 initialDelaySeconds: 5
 periodSeconds: 5
-{{- with .Values.preflight.extraEnv }}
 env:
+- name: K8S_NODE_NAME
+  valueFrom:
+    fieldRef:
+      apiVersion: v1
+      fieldPath: spec.nodeName
+{{- with .Values.preflight.extraEnv }}
 {{- toYaml . | trim | nindent 12 }}
 {{- end }}
 volumeMounts:

@@ -88,10 +88,12 @@ spec:
 nodeSelector:
 {{- toYaml . | nindent 8 }}
 {{- end }}
-{{- with .Values.authentication.mutual.spire.install.agent.tolerations }}
 tolerations:
+{{- with .Values.authentication.mutual.spire.install.agent.tolerations }}
 {{- toYaml . | trim | nindent 8 }}
 {{- end }}
+- key: {{ .Values.agentNotReadyTaintKey | default "node.cilium.io/agent-not-ready" }}
+  effect: NoSchedule
 volumes:
 - name: spire-config
   configMap:
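With this template change the SPIRE agent always tolerates the Cilium not-ready taint in addition to whatever authentication.mutual.spire.install.agent.tolerations provides; with the default taint key, the extra rendered entry is:

    tolerations:
      - key: node.cilium.io/agent-not-ready
        effect: NoSchedule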

@@ -143,10 +143,10 @@ rollOutCiliumPods: false
image: image:
override: ~ override: ~
repository: "quay.io/cilium/cilium" repository: "quay.io/cilium/cilium"
tag: "v1.14.5" tag: "v1.14.9"
pullPolicy: "IfNotPresent" pullPolicy: "IfNotPresent"
# cilium-digest # cilium-digest
digest: "sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b" digest: "sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301"
useDigest: true useDigest: true
# -- Affinity for cilium-agent. # -- Affinity for cilium-agent.
@@ -1109,9 +1109,9 @@ hubble:
image: image:
override: ~ override: ~
repository: "quay.io/cilium/hubble-relay" repository: "quay.io/cilium/hubble-relay"
tag: "v1.14.5" tag: "v1.14.9"
# hubble-relay-digest # hubble-relay-digest
digest: "sha256:dbef89f924a927043d02b40c18e417c1ea0e8f58b44523b80fef7e3652db24d4" digest: "sha256:f506f3c6e0a979437cde79eb781654fda4f10ddb5642cebc4dc81254cfb7eeaa"
useDigest: true useDigest: true
pullPolicy: "IfNotPresent" pullPolicy: "IfNotPresent"
@@ -1337,8 +1337,8 @@ hubble:
image: image:
override: ~ override: ~
repository: "quay.io/cilium/hubble-ui-backend" repository: "quay.io/cilium/hubble-ui-backend"
tag: "v0.12.1" tag: "v0.13.0"
digest: "sha256:1f86f3400827a0451e6332262467f894eeb7caf0eb8779bd951e2caa9d027cbe" digest: "sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803"
useDigest: true useDigest: true
pullPolicy: "IfNotPresent" pullPolicy: "IfNotPresent"
@@ -1368,8 +1368,8 @@ hubble:
image: image:
override: ~ override: ~
repository: "quay.io/cilium/hubble-ui" repository: "quay.io/cilium/hubble-ui"
tag: "v0.12.1" tag: "v0.13.0"
digest: "sha256:9e5f81ee747866480ea1ac4630eb6975ff9227f9782b7c93919c081c33f38267" digest: "sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666"
useDigest: true useDigest: true
pullPolicy: "IfNotPresent" pullPolicy: "IfNotPresent"
@@ -1853,9 +1853,9 @@ envoy:
image: image:
override: ~ override: ~
repository: "quay.io/cilium/cilium-envoy" repository: "quay.io/cilium/cilium-envoy"
tag: "v1.26.6-ad82c7c56e88989992fd25d8d67747de865c823b" tag: "v1.26.7-bbde4095997ea57ead209f56158790d47224a0f5"
pullPolicy: "IfNotPresent" pullPolicy: "IfNotPresent"
digest: "sha256:992998398dadfff7117bfa9fdb7c9474fefab7f0237263f7c8114e106c67baca" digest: "sha256:39b75548447978230dedcf25da8940e4d3540c741045ef391a8e74dbb9661a86"
useDigest: true useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet. # -- Additional containers added to the cilium Envoy DaemonSet.
@@ -1968,7 +1968,20 @@ envoy:
labelSelector: labelSelector:
matchLabels: matchLabels:
k8s-app: cilium-envoy k8s-app: cilium-envoy
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
k8s-app: cilium
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cilium.io/no-schedule
operator: NotIn
values:
- "true"
# -- Node selector for cilium-envoy. # -- Node selector for cilium-envoy.
nodeSelector: nodeSelector:
kubernetes.io/os: linux kubernetes.io/os: linux
@@ -1989,12 +2002,16 @@ envoy:
# Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy # Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
dnsPolicy: ~ dnsPolicy: ~
# -- Configure Cilium Envoy Prometheus options.
# Note that some of these apply to either cilium-agent or cilium-envoy.
prometheus: prometheus:
# -- Enable prometheus metrics for cilium-envoy # -- Enable prometheus metrics for cilium-envoy
enabled: true
serviceMonitor:
# -- Enable service monitors.
# This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)
+# Note that this setting applies to both cilium-envoy _and_ cilium-agent
+# with Envoy enabled.
enabled: false
# -- Labels to add to ServiceMonitor cilium-envoy
labels: {}
@@ -2006,12 +2023,14 @@ envoy:
# service monitors configured.
# namespace: ""
# -- Relabeling configs for the ServiceMonitor cilium-envoy
+# or for cilium-agent with Envoy configured.
relabelings:
- sourceLabels:
- __meta_kubernetes_pod_node_name
targetLabel: node
replacement: ${1}
# -- Metrics relabeling configs for the ServiceMonitor cilium-envoy
+# or for cilium-agent with Envoy configured.
metricRelabelings: ~
# -- Serve prometheus metrics for cilium-envoy on the configured port
port: "9964"
@@ -2250,15 +2269,15 @@ operator:
image:
override: ~
repository: "quay.io/cilium/operator"
-tag: "v1.14.5"
+tag: "v1.14.9"
# operator-generic-digest
-genericDigest: "sha256:303f9076bdc73b3fc32aaedee64a14f6f44c8bb08ee9e3956d443021103ebe7a"
+genericDigest: "sha256:1552d653870dd8ebbd16ee985a5497dd78a2097370978b0cfbd2da2072f30712"
# operator-azure-digest
-azureDigest: "sha256:9203f5583aa34e716d7a6588ebd144e43ce3b77873f578fc12b2679e33591353"
+azureDigest: "sha256:2d3b9d868eb03fa9256d34192a734a2abab283f527a9c97b7cefcd3401649d17"
# operator-aws-digest
-awsDigest: "sha256:785ccf1267d0ed3ba9e4bd8166577cb4f9e4ce996af26b27c9d5c554a0d5b09a"
+awsDigest: "sha256:041ad5b49ae63ba0f1974e1a1d9ebf9f52541cd2813088fa687f9d544125a1ec"
# operator-alibabacloud-digest
-alibabacloudDigest: "sha256:e0152c498ba73c56a82eee2a706c8f400e9a6999c665af31a935bdf08e659bc3"
+alibabacloudDigest: "sha256:765314779093b54750f83280f009229f20fe1f28466a633d9bb4143d2ad669c5"
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""
@@ -2535,9 +2554,9 @@ preflight:
image:
override: ~
repository: "quay.io/cilium/cilium"
-tag: "v1.14.5"
+tag: "v1.14.9"
# cilium-digest
-digest: "sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b"
+digest: "sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301"
useDigest: true
pullPolicy: "IfNotPresent"
@@ -2685,9 +2704,9 @@ clustermesh:
image:
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
-tag: "v1.14.5"
+tag: "v1.14.9"
# clustermesh-apiserver-digest
-digest: "sha256:7eaa35cf5452c43b1f7d0cde0d707823ae7e49965bcb54c053e31ea4e04c3d96"
+digest: "sha256:5c16f8b8e22ce41e11998e70846fbcecea3a6b683a38253809ead8d871f6d8a3"
useDigest: true
pullPolicy: "IfNotPresent"
@@ -2732,9 +2751,9 @@ clustermesh:
image:
override: ~
repository: "quay.io/cilium/kvstoremesh"
-tag: "v1.14.5"
+tag: "v1.14.9"
# kvstoremesh-digest
-digest: "sha256:d7137edd0efa2b1407b20088af3980a9993bb616d85bf9b55ea2891d1b99023a"
+digest: "sha256:9d9efb25806660f3663b9cd803fb8679f2b115763470002a9770e2c1eb1e5b22"
useDigest: true
pullPolicy: "IfNotPresent"
@@ -3086,6 +3105,8 @@ dnsProxy:
proxyPort: 0
# -- The maximum time the DNS proxy holds an allowed DNS response before sending it along. Responses are sent as soon as the datapath is updated with the new IP information.
proxyResponseMaxDelay: 100ms
+# -- DNS proxy operation mode (true/false, or unset to use version dependent defaults)
+# enableTransparentMode: true
# -- SCTP Configuration Values
sctp:
@@ -3136,8 +3157,21 @@ authentication:
# -- SPIRE Workload Attestor kubelet verification.
skipKubeletVerification: true
# -- SPIRE agent tolerations configuration
+# By default it follows the same tolerations as the agent itself
+# to allow the Cilium agent on this node to connect to SPIRE.
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
-tolerations: []
+tolerations:
+- key: node.kubernetes.io/not-ready
+  effect: NoSchedule
+- key: node-role.kubernetes.io/master
+  effect: NoSchedule
+- key: node-role.kubernetes.io/control-plane
+  effect: NoSchedule
+- key: node.cloudprovider.kubernetes.io/uninitialized
+  effect: NoSchedule
+  value: "true"
+- key: CriticalAddonsOnly
+  operator: "Exists"
# -- SPIRE agent affinity configuration
affinity: {}
# -- SPIRE agent nodeSelector configuration
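As a reference, here is a minimal sketch of a user-supplied values override that opts into the options documented by the new comments above; the nesting is assumed to follow the chart's top-level `envoy:` and `dnsProxy:` sections named in the hunk headers, so paths may need adjusting to your layout.

```yaml
# Hypothetical values override for the Cilium chart shown above
envoy:
  prometheus:
    enabled: true
    serviceMonitor:
      enabled: true        # requires the Prometheus Operator CRDs to be installed
dnsProxy:
  enableTransparentMode: true   # uncommented form of the new "# enableTransparentMode: true" hint
```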


@@ -1854,9 +1854,9 @@ envoy:
image:
override: ~
repository: "quay.io/cilium/cilium-envoy"
-tag: "v1.26.6-ad82c7c56e88989992fd25d8d67747de865c823b"
+tag: "v1.26.7-bbde4095997ea57ead209f56158790d47224a0f5"
pullPolicy: "${PULL_POLICY}"
-digest: "sha256:992998398dadfff7117bfa9fdb7c9474fefab7f0237263f7c8114e106c67baca"
+digest: "sha256:39b75548447978230dedcf25da8940e4d3540c741045ef391a8e74dbb9661a86"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
@@ -1969,7 +1969,20 @@ envoy:
labelSelector:
matchLabels:
k8s-app: cilium-envoy
+podAffinity:
+  requiredDuringSchedulingIgnoredDuringExecution:
+  - topologyKey: kubernetes.io/hostname
+    labelSelector:
+      matchLabels:
+        k8s-app: cilium
+nodeAffinity:
+  requiredDuringSchedulingIgnoredDuringExecution:
+    nodeSelectorTerms:
+    - matchExpressions:
+      - key: cilium.io/no-schedule
+        operator: NotIn
+        values:
+        - "true"
# -- Node selector for cilium-envoy.
nodeSelector:
kubernetes.io/os: linux
@@ -1990,12 +2003,16 @@ envoy:
# Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
dnsPolicy: ~
+# -- Configure Cilium Envoy Prometheus options.
+# Note that some of these apply to either cilium-agent or cilium-envoy.
prometheus:
# -- Enable prometheus metrics for cilium-envoy
enabled: true
serviceMonitor:
# -- Enable service monitors.
# This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)
+# Note that this setting applies to both cilium-envoy _and_ cilium-agent
+# with Envoy enabled.
enabled: false
# -- Labels to add to ServiceMonitor cilium-envoy
labels: {}
@@ -2007,12 +2024,14 @@ envoy:
# service monitors configured.
# namespace: ""
# -- Relabeling configs for the ServiceMonitor cilium-envoy
+# or for cilium-agent with Envoy configured.
relabelings:
- sourceLabels:
- __meta_kubernetes_pod_node_name
targetLabel: node
replacement: ${1}
# -- Metrics relabeling configs for the ServiceMonitor cilium-envoy
+# or for cilium-agent with Envoy configured.
metricRelabelings: ~
# -- Serve prometheus metrics for cilium-envoy on the configured port
port: "9964"
@@ -3089,6 +3108,8 @@ dnsProxy:
proxyPort: 0
# -- The maximum time the DNS proxy holds an allowed DNS response before sending it along. Responses are sent as soon as the datapath is updated with the new IP information.
proxyResponseMaxDelay: 100ms
+# -- DNS proxy operation mode (true/false, or unset to use version dependent defaults)
+# enableTransparentMode: true
# -- SCTP Configuration Values
sctp:
@@ -3139,8 +3160,21 @@ authentication:
# -- SPIRE Workload Attestor kubelet verification.
skipKubeletVerification: true
# -- SPIRE agent tolerations configuration
+# By default it follows the same tolerations as the agent itself
+# to allow the Cilium agent on this node to connect to SPIRE.
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
-tolerations: []
+tolerations:
+- key: node.kubernetes.io/not-ready
+  effect: NoSchedule
+- key: node-role.kubernetes.io/master
+  effect: NoSchedule
+- key: node-role.kubernetes.io/control-plane
+  effect: NoSchedule
+- key: node.cloudprovider.kubernetes.io/uninitialized
+  effect: NoSchedule
+  value: "true"
+- key: CriticalAddonsOnly
+  operator: "Exists"
# -- SPIRE agent affinity configuration
affinity: {}
# -- SPIRE agent nodeSelector configuration
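The new `nodeAffinity` term above keeps the cilium-envoy DaemonSet off any node carrying the label `cilium.io/no-schedule=true`, while the `podAffinity` term co-locates it with a running cilium agent on the same host. If that reading is correct, excluding a node would presumably just be a matter of labelling it (node name below is hypothetical):

```bash
# Label a node so cilium-envoy is no longer scheduled there
kubectl label node worker-1 cilium.io/no-schedule=true
```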


@@ -3,11 +3,10 @@ cilium:
enabled: false
externalIPs:
enabled: true
-tunnel: disabled
autoDirectNodeRoutes: false
kubeProxyReplacement: strict
bpf:
-masquerade: true
+masquerade: false
loadBalancer:
algorithm: maglev
cgroup:
@@ -25,3 +24,4 @@ cilium:
configMap: cni-configuration
routingMode: native
enableIPv4Masquerade: false
+enableIdentityMark: false
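This hunk drops the `tunnel: disabled` key in favor of the `routingMode: native` setting already present further down; Cilium 1.14 deprecates `tunnel` in favor of `routingMode`. Assuming the surrounding keys nest under the `cilium:` block named in the hunk header, the routing-related values after this change come out roughly as follows (a sketch, not the full file):

```yaml
# Sketch of the resulting cilium sub-values after this patch (nesting assumed)
cilium:
  routingMode: native          # replaces the removed, deprecated `tunnel: disabled`
  autoDirectNodeRoutes: false
  kubeProxyReplacement: strict
  bpf:
    masquerade: false          # was true before this change
  enableIPv4Masquerade: false
  enableIdentityMark: false    # newly added
```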


@@ -1,2 +1,2 @@
name: cozy-dashboard
-version: 1.0.0
+version: 0.2.0


@@ -3,7 +3,7 @@ NAMESPACE=cozy-dashboard
PUSH := 1
LOAD := 0
REPOSITORY := ghcr.io/aenix-io/cozystack
-TAG := v0.1.0
+TAG := v0.2.0
show:
helm template --dry-run=server -n $(NAMESPACE) $(NAME) .


@@ -22,3 +22,5 @@
.project
.idea/
*.tmproj
+# img folder
+img/


@@ -1,12 +1,12 @@
dependencies:
- name: redis
repository: oci://registry-1.docker.io/bitnamicharts
-version: 18.4.0
+version: 18.19.2
- name: postgresql
repository: oci://registry-1.docker.io/bitnamicharts
-version: 13.2.14
+version: 13.4.6
- name: common
repository: oci://registry-1.docker.io/bitnamicharts
-version: 2.13.3
+version: 2.19.0
-digest: sha256:7bede05a463745ea72d332aaaf406d84e335d8af09dce403736f4e4e14c3554d
+digest: sha256:b4965a22517e61212e78abb8d1cbe86e800c8664b3139e2047f4bd62b3e55b24
-generated: "2023-11-21T18:18:20.024990735Z"
+generated: "2024-03-13T11:51:34.216594+01:00"
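Chart.lock is generated output: once the dependency versions in Chart.yaml are bumped (see the next file), the lock file, including the digest and generated timestamp seen here, is typically refreshed with `helm dependency update`:

```bash
# Run from the kubeapps chart directory (exact path depends on the repo layout)
helm dependency update
```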


@@ -2,21 +2,21 @@ annotations:
category: Infrastructure
images: |
- name: kubeapps-apis
-image: docker.io/bitnami/kubeapps-apis:2.9.0-debian-11-r13
+image: docker.io/bitnami/kubeapps-apis:2.9.0-debian-12-r19
- name: kubeapps-apprepository-controller
-image: docker.io/bitnami/kubeapps-apprepository-controller:2.9.0-debian-11-r12
+image: docker.io/bitnami/kubeapps-apprepository-controller:2.9.0-debian-12-r18
- name: kubeapps-asset-syncer
-image: docker.io/bitnami/kubeapps-asset-syncer:2.9.0-debian-11-r13
+image: docker.io/bitnami/kubeapps-asset-syncer:2.9.0-debian-12-r19
-- name: kubeapps-oci-catalog
-image: docker.io/bitnami/kubeapps-oci-catalog:2.9.0-debian-11-r6
-- name: kubeapps-pinniped-proxy
-image: docker.io/bitnami/kubeapps-pinniped-proxy:2.9.0-debian-11-r10
- name: kubeapps-dashboard
-image: docker.io/bitnami/kubeapps-dashboard:2.9.0-debian-11-r16
+image: docker.io/bitnami/kubeapps-dashboard:2.9.0-debian-12-r18
+- name: kubeapps-oci-catalog
+image: docker.io/bitnami/kubeapps-oci-catalog:2.9.0-debian-12-r17
+- name: kubeapps-pinniped-proxy
+image: docker.io/bitnami/kubeapps-pinniped-proxy:2.9.0-debian-12-r17
- name: nginx
-image: docker.io/bitnami/nginx:1.25.3-debian-11-r1
+image: docker.io/bitnami/nginx:1.25.4-debian-12-r3
- name: oauth2-proxy
-image: docker.io/bitnami/oauth2-proxy:7.5.1-debian-11-r11
+image: docker.io/bitnami/oauth2-proxy:7.6.0-debian-12-r4
licenses: Apache-2.0
apiVersion: v2
appVersion: 2.9.0
@@ -51,4 +51,4 @@ maintainers:
name: kubeapps
sources:
- https://github.com/bitnami/charts/tree/main/bitnami/kubeapps
-version: 14.1.2
+version: 14.7.2


@@ -62,10 +62,11 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### Global parameters
| Name | Description | Value |
| --- | --- | --- |
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
+| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | `disabled` |
### Common parameters
@@ -112,7 +113,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### Frontend parameters
| Name | Description | Value |
| --- | --- | --- |
| `frontend.image.registry` | NGINX image registry | `REGISTRY_NAME` |
| `frontend.image.repository` | NGINX image repository | `REPOSITORY_NAME/nginx` |
| `frontend.image.digest` | NGINX image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
@@ -124,6 +125,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `frontend.largeClientHeaderBuffers` | Set large_client_header_buffers in NGINX config | `4 32k` |
| `frontend.replicaCount` | Number of frontend replicas to deploy | `2` |
| `frontend.updateStrategy.type` | Frontend deployment strategy type. | `RollingUpdate` |
+| `frontend.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if frontend.resources is set (frontend.resources is recommended for production). | `none` |
| `frontend.resources.limits.cpu` | The CPU limits for the NGINX container | `250m` |
| `frontend.resources.limits.memory` | The memory limits for the NGINX container | `128Mi` |
| `frontend.resources.requests.cpu` | The requested CPU for the NGINX container | `25m` |
@@ -133,9 +135,14 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `frontend.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the NGINX container | `""` |
| `frontend.containerPorts.http` | NGINX HTTP container port | `8080` |
| `frontend.podSecurityContext.enabled` | Enabled frontend pods' Security Context | `true` |
+| `frontend.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
+| `frontend.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
+| `frontend.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `frontend.podSecurityContext.fsGroup` | Set frontend pod's Security Context fsGroup | `1001` |
| `frontend.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
+| `frontend.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `frontend.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
+| `frontend.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `frontend.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `frontend.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `frontend.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
@@ -179,6 +186,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `frontend.priorityClassName` | Priority class name for frontend pods | `""` |
| `frontend.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `frontend.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
+| `frontend.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `frontend.hostAliases` | Custom host aliases for frontend pods | `[]` |
| `frontend.extraVolumes` | Optionally specify extra list of additional volumes for frontend pods | `[]` |
| `frontend.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for frontend container(s) | `[]` |
@@ -199,7 +207,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### Dashboard parameters
| Name | Description | Value |
| --- | --- | --- |
| `dashboard.enabled` | Specifies whether Kubeapps Dashboard should be deployed or not | `true` |
| `dashboard.image.registry` | Dashboard image registry | `REGISTRY_NAME` |
| `dashboard.image.repository` | Dashboard image repository | `REPOSITORY_NAME/kubeapps-dashboard` |
@@ -221,14 +229,20 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `dashboard.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the Dashboard container | `""` |
| `dashboard.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the Dashboard container | `""` |
| `dashboard.containerPorts.http` | Dashboard HTTP container port | `8080` |
+| `dashboard.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if dashboard.resources is set (dashboard.resources is recommended for production). | `none` |
| `dashboard.resources.limits.cpu` | The CPU limits for the Dashboard container | `250m` |
| `dashboard.resources.limits.memory` | The memory limits for the Dashboard container | `128Mi` |
| `dashboard.resources.requests.cpu` | The requested CPU for the Dashboard container | `25m` |
| `dashboard.resources.requests.memory` | The requested memory for the Dashboard container | `32Mi` |
| `dashboard.podSecurityContext.enabled` | Enabled Dashboard pods' Security Context | `true` |
+| `dashboard.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
+| `dashboard.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
+| `dashboard.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `dashboard.podSecurityContext.fsGroup` | Set Dashboard pod's Security Context fsGroup | `1001` |
| `dashboard.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
+| `dashboard.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `dashboard.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
+| `dashboard.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `dashboard.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `dashboard.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `dashboard.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
@@ -272,6 +286,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `dashboard.priorityClassName` | Priority class name for Dashboard pods | `""` |
| `dashboard.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `dashboard.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
+| `dashboard.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `dashboard.hostAliases` | Custom host aliases for Dashboard pods | `[]` |
| `dashboard.extraVolumes` | Optionally specify extra list of additional volumes for Dashboard pods | `[]` |
| `dashboard.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for Dashboard container(s) | `[]` |
@@ -283,7 +298,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### AppRepository Controller parameters
| Name | Description | Value |
| --- | --- | --- |
| `apprepository.image.registry` | Kubeapps AppRepository Controller image registry | `REGISTRY_NAME` |
| `apprepository.image.repository` | Kubeapps AppRepository Controller image repository | `REPOSITORY_NAME/kubeapps-apprepository-controller` |
| `apprepository.image.digest` | Kubeapps AppRepository Controller image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
@@ -307,14 +322,20 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `apprepository.extraFlags` | Additional command line flags for AppRepository Controller | `[]` |
| `apprepository.replicaCount` | Number of AppRepository Controller replicas to deploy | `1` |
| `apprepository.updateStrategy.type` | AppRepository Controller deployment strategy type. | `RollingUpdate` |
+| `apprepository.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if apprepository.resources is set (apprepository.resources is recommended for production). | `none` |
| `apprepository.resources.limits.cpu` | The CPU limits for the AppRepository Controller container | `250m` |
| `apprepository.resources.limits.memory` | The memory limits for the AppRepository Controller container | `128Mi` |
| `apprepository.resources.requests.cpu` | The requested CPU for the AppRepository Controller container | `25m` |
| `apprepository.resources.requests.memory` | The requested memory for the AppRepository Controller container | `32Mi` |
| `apprepository.podSecurityContext.enabled` | Enabled AppRepository Controller pods' Security Context | `true` |
+| `apprepository.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
+| `apprepository.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
+| `apprepository.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `apprepository.podSecurityContext.fsGroup` | Set AppRepository Controller pod's Security Context fsGroup | `1001` |
| `apprepository.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
+| `apprepository.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `apprepository.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
+| `apprepository.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `apprepository.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `apprepository.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `apprepository.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
@@ -342,18 +363,19 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `apprepository.priorityClassName` | Priority class name for AppRepository Controller pods | `""` |
| `apprepository.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `apprepository.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
+| `apprepository.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `apprepository.hostAliases` | Custom host aliases for AppRepository Controller pods | `[]` |
| `apprepository.sidecars` | Add additional sidecar containers to the AppRepository Controller pod(s) | `[]` |
| `apprepository.initContainers` | Add additional init containers to the AppRepository Controller pod(s) | `[]` |
| `apprepository.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `apprepository.serviceAccount.name` | Name of the service account to use. If not set and create is true, a name is generated using the fullname template. | `""` |
-| `apprepository.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `true` |
+| `apprepository.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| `apprepository.serviceAccount.annotations` | Annotations for service account. Evaluated as a template. Only used if `create` is `true`. | `{}` |
### Auth Proxy parameters
| Name | Description | Value |
| --- | --- | --- |
| `authProxy.enabled` | Specifies whether Kubeapps should configure OAuth login/logout | `false` |
| `authProxy.image.registry` | OAuth2 Proxy image registry | `REGISTRY_NAME` |
| `authProxy.image.repository` | OAuth2 Proxy image repository | `REPOSITORY_NAME/oauth2-proxy` |
@@ -382,13 +404,16 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `authProxy.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Auth Proxy container(s) | `[]` |
| `authProxy.containerPorts.proxy` | Auth Proxy HTTP container port | `3000` |
| `authProxy.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
+| `authProxy.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `authProxy.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
+| `authProxy.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `authProxy.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `authProxy.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `authProxy.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `authProxy.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `authProxy.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `authProxy.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
+| `authProxy.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if authProxy.resources is set (authProxy.resources is recommended for production). | `none` |
| `authProxy.resources.limits.cpu` | The CPU limits for the OAuth2 Proxy container | `250m` |
| `authProxy.resources.limits.memory` | The memory limits for the OAuth2 Proxy container | `128Mi` |
| `authProxy.resources.requests.cpu` | The requested CPU for the OAuth2 Proxy container | `25m` |
@@ -397,7 +422,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### Pinniped Proxy parameters
| Name | Description | Value |
| --- | --- | --- |
| `pinnipedProxy.enabled` | Specifies whether Kubeapps should configure Pinniped Proxy | `false` |
| `pinnipedProxy.image.registry` | Pinniped Proxy image registry | `REGISTRY_NAME` |
| `pinnipedProxy.image.repository` | Pinniped Proxy image repository | `REPOSITORY_NAME/kubeapps-pinniped-proxy` |
@@ -419,13 +444,16 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `pinnipedProxy.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Pinniped Proxy container(s) | `[]` |
| `pinnipedProxy.containerPorts.pinnipedProxy` | Pinniped Proxy container port | `3333` |
| `pinnipedProxy.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
+| `pinnipedProxy.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `pinnipedProxy.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
+| `pinnipedProxy.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `pinnipedProxy.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `pinnipedProxy.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `pinnipedProxy.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `pinnipedProxy.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `pinnipedProxy.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `pinnipedProxy.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
+| `pinnipedProxy.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if pinnipedProxy.resources is set (pinnipedProxy.resources is recommended for production). | `none` |
| `pinnipedProxy.resources.limits.cpu` | The CPU limits for the Pinniped Proxy container | `250m` |
| `pinnipedProxy.resources.limits.memory` | The memory limits for the Pinniped Proxy container | `128Mi` |
| `pinnipedProxy.resources.requests.cpu` | The requested CPU for the Pinniped Proxy container | `25m` |
@@ -452,7 +480,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### Database Parameters
| Name | Description | Value |
| --- | --- | --- |
| `postgresql.enabled` | Deploy a PostgreSQL server to satisfy the applications database requirements | `true` |
| `postgresql.auth.username` | Username for PostgreSQL server | `postgres` |
| `postgresql.auth.postgresPassword` | Password for 'postgres' user | `""` |
@@ -461,6 +489,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `postgresql.primary.persistence.enabled` | Enable PostgreSQL Primary data persistence using PVC | `false` |
| `postgresql.architecture` | PostgreSQL architecture (`standalone` or `replication`) | `standalone` |
| `postgresql.securityContext.enabled` | Enabled PostgreSQL replicas pods' Security Context | `false` |
+| `postgresql.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if postgresql.resources is set (postgresql.resources is recommended for production). | `none` |
| `postgresql.resources.limits` | The resources limits for the PostgreSQL container | `{}` |
| `postgresql.resources.requests.cpu` | The requested CPU for the PostgreSQL container | `250m` |
| `postgresql.resources.requests.memory` | The requested memory for the PostgreSQL container | `256Mi` |
@@ -468,7 +497,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
### kubeappsapis parameters
| Name | Description | Value |
| --- | --- | --- |
| `kubeappsapis.enabledPlugins` | Manually override which plugins are enabled for the Kubeapps-APIs service | `[]` |
| `kubeappsapis.pluginConfig.core.packages.v1alpha1.versionsInSummary.major` | Number of major versions to display in the summary | `3` |
| `kubeappsapis.pluginConfig.core.packages.v1alpha1.versionsInSummary.minor` | Number of minor versions to display in the summary | `3` |
@@ -498,14 +527,20 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `kubeappsapis.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the KubeappsAPIs container | `""` |
| `kubeappsapis.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the KubeappsAPIs container | `""` |
| `kubeappsapis.containerPorts.http` | KubeappsAPIs HTTP container port | `50051` |
+| `kubeappsapis.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if kubeappsapis.resources is set (kubeappsapis.resources is recommended for production). | `none` |
| `kubeappsapis.resources.limits.cpu` | The CPU limits for the KubeappsAPIs container | `250m` |
| `kubeappsapis.resources.limits.memory` | The memory limits for the KubeappsAPIs container | `256Mi` |
| `kubeappsapis.resources.requests.cpu` | The requested CPU for the KubeappsAPIs container | `25m` |
| `kubeappsapis.resources.requests.memory` | The requested memory for the KubeappsAPIs container | `32Mi` |
| `kubeappsapis.podSecurityContext.enabled` | Enabled KubeappsAPIs pods' Security Context | `true` |
+| `kubeappsapis.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
+| `kubeappsapis.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
+| `kubeappsapis.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `kubeappsapis.podSecurityContext.fsGroup` | Set KubeappsAPIs pod's Security Context fsGroup | `1001` |
| `kubeappsapis.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
+| `kubeappsapis.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `kubeappsapis.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
+| `kubeappsapis.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `kubeappsapis.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `kubeappsapis.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `kubeappsapis.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
@@ -551,6 +586,7 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `kubeappsapis.priorityClassName` | Priority class name for KubeappsAPIs pods | `""` |
| `kubeappsapis.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `kubeappsapis.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
+| `kubeappsapis.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `kubeappsapis.hostAliases` | Custom host aliases for KubeappsAPIs pods | `[]` |
| `kubeappsapis.sidecars` | Add additional sidecar containers to the KubeappsAPIs pod(s) | `[]` |
| `kubeappsapis.initContainers` | Add additional init containers to the KubeappsAPIs pod(s) | `[]` |
@@ -558,13 +594,13 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `kubeappsapis.service.annotations` | Additional custom annotations for KubeappsAPIs service | `{}` |
| `kubeappsapis.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `kubeappsapis.serviceAccount.name` | Name of the service account to use. If not set and create is true, a name is generated using the fullname template. | `""` |
-| `kubeappsapis.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `true` |
+| `kubeappsapis.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| `kubeappsapis.serviceAccount.annotations` | Annotations for service account. Evaluated as a template. Only used if `create` is `true`. | `{}` |
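Note the changed default above: the kubeapps 14.7.2 chart now ships `serviceAccount.automountServiceAccountToken: false` for both the AppRepository Controller and the KubeappsAPIs service accounts. A deployment that relies on the old behaviour could presumably restore it with a values override along these lines (parameters taken from the tables above):

```yaml
# Hypothetical override restoring the pre-14.7.2 default
apprepository:
  serviceAccount:
    automountServiceAccountToken: true
kubeappsapis:
  serviceAccount:
    automountServiceAccountToken: true
```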
### OCI Catalog chart configuration ### OCI Catalog chart configuration
| Name | Description | Value | | Name | Description | Value |
| -------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -------------------------------------- | | -------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
| `ociCatalog.enabled` | Enable the OCI catalog gRPC service for cataloging | `false` | | `ociCatalog.enabled` | Enable the OCI catalog gRPC service for cataloging | `false` |
| `ociCatalog.image.registry` | OCI Catalog image registry | `REGISTRY_NAME` | | `ociCatalog.image.registry` | OCI Catalog image registry | `REGISTRY_NAME` |
| `ociCatalog.image.repository` | OCI Catalog image repository | `REPOSITORY_NAME/kubeapps-oci-catalog` | | `ociCatalog.image.repository` | OCI Catalog image repository | `REPOSITORY_NAME/kubeapps-oci-catalog` |
@@ -577,12 +613,15 @@ Once you have installed Kubeapps follow the [Getting Started Guide](https://gith
| `ociCatalog.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the OCI Catalog container | `""` | | `ociCatalog.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the OCI Catalog container | `""` |
| `ociCatalog.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the OCI Catalog container | `""` | | `ociCatalog.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the OCI Catalog container | `""` |
| `ociCatalog.containerPorts.grpc` | OCI Catalog gRPC container port | `50061` | | `ociCatalog.containerPorts.grpc` | OCI Catalog gRPC container port | `50061` |
| `ociCatalog.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if ociCatalog.resources is set (ociCatalog.resources is recommended for production). | `none` |
| `ociCatalog.resources.limits.cpu` | The CPU limits for the OCI Catalog container | `250m` | | `ociCatalog.resources.limits.cpu` | The CPU limits for the OCI Catalog container | `250m` |
| `ociCatalog.resources.limits.memory` | The memory limits for the OCI Catalog container | `256Mi` | | `ociCatalog.resources.limits.memory` | The memory limits for the OCI Catalog container | `256Mi` |
| `ociCatalog.resources.requests.cpu` | The requested CPU for the OCI Catalog container | `25m` | | `ociCatalog.resources.requests.cpu` | The requested CPU for the OCI Catalog container | `25m` |
| `ociCatalog.resources.requests.memory` | The requested memory for the OCI Catalog container | `32Mi` | | `ociCatalog.resources.requests.memory` | The requested memory for the OCI Catalog container | `32Mi` |
| `ociCatalog.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` | | `ociCatalog.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `ociCatalog.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `ociCatalog.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` | | `ociCatalog.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `ociCatalog.containerSecurityContext.runAsGroup` | Set containers' Security Context runAsGroup | `0` |
| `ociCatalog.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` | | `ociCatalog.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `ociCatalog.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` | | `ociCatalog.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `ociCatalog.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` | | `ociCatalog.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
@@ -652,6 +691,12 @@ helm install kubeapps --namespace kubeapps -f custom-values.yaml oci://REGISTRY_
## Configuration and installation details
### Resource requests and limits
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check the parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the `resourcesPreset` values, which automatically set the `resources` section according to different presets. Check these presets in [the bitnami/common chart](https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15). However, in production workloads using `resourcesPreset` is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
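As an illustration only, a hedged `custom-values.yaml` sketch for the OCI Catalog component (the `ociCatalog.*` paths and figures mirror the parameter table above; adjust them to your own workload):

```yaml
# custom-values.yaml (sketch): explicit resources take precedence over resourcesPreset
ociCatalog:
  resourcesPreset: none   # ignored as soon as ociCatalog.resources is set
  resources:
    requests:
      cpu: 25m
      memory: 32Mi
    limits:
      cpu: 250m
      memory: 256Mi
```

Passing such a file with `-f custom-values.yaml` on `helm install` or `helm upgrade` then overrides any preset.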
### Configuring Initial Repositories
By default, Kubeapps will track the [Bitnami Application Catalog](https://github.com/bitnami/charts). To change these defaults, override the `apprepository.initialRepos` object present in the [values.yaml](https://github.com/bitnami/charts/tree/main/bitnami/kubeapps/values.yaml) file with your desired parameters.
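For example, a hedged override sketch (the `name`/`url` fields follow the usual `initialRepos` entry layout; the repository shown is hypothetical):

```yaml
apprepository:
  initialRepos:
    - name: my-catalog
      url: https://charts.example.com
```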
@@ -909,7 +954,7 @@ Feel free to [open an issue](https://github.com/vmware-tanzu/kubeapps/issues/new
This major release renames several values in this chart and adds missing features, in order to get aligned with the rest of the assets in the Bitnami charts repository.
Additionally, it updates both the [PostgreSQL](https://github.com/bitnami/charts/tree/main/bitnami/postgresql) and the [Redis](https://github.com/bitnami/charts/tree/main/bitnami/redis) subcharts to their latest major versions, 11.0.0 and 16.0.0 respectively, where similar changes have been also performed.
Check [PostgreSQL Upgrading Notes](https://docs.bitnami.com/kubernetes/infrastructure/postgresql/administration/upgrade/#to-1100) and [Redis Upgrading Notes](https://github.com/bitnami/charts/tree/main/bitnami/redis#to-1600) for more information. Check [PostgreSQL Upgrading Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1100) and [Redis Upgrading Notes](https://github.com/bitnami/charts/tree/main/bitnami/redis#to-1600) for more information.
The following values have been renamed:
@@ -1133,7 +1178,7 @@ After that, you should be able to upgrade Kubeapps as always and the database wi
## License
Copyright &copy; 2023 VMware, Inc. Copyright &copy; 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -20,3 +20,5 @@
.idea/ .idea/
*.tmproj *.tmproj
.vscode/ .vscode/
# img folder
img/

View File

@@ -2,7 +2,7 @@ annotations:
category: Infrastructure category: Infrastructure
licenses: Apache-2.0 licenses: Apache-2.0
apiVersion: v2 apiVersion: v2
appVersion: 2.13.3 appVersion: 2.19.0
description: A Library Helm Chart for grouping common logic between bitnami charts. description: A Library Helm Chart for grouping common logic between bitnami charts.
This chart is not deployable by itself. This chart is not deployable by itself.
home: https://bitnami.com home: https://bitnami.com
@@ -20,4 +20,4 @@ name: common
sources: sources:
- https://github.com/bitnami/charts - https://github.com/bitnami/charts
type: library type: library
version: 2.13.3 version: 2.19.0

View File

@@ -24,14 +24,14 @@ data:
myvalue: "Hello World" myvalue: "Hello World"
``` ```
Looking to use our applications in production? Try [VMware Tanzu Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
## Introduction
This chart provides common template helpers which can be used to develop new charts using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.dev/) for deployment and management of Helm Charts in clusters.
Looking to use our applications in production? Try [VMware Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
## Prerequisites
- Kubernetes 1.23+
@@ -220,7 +220,7 @@ helm install test mychart --set path.to.value00="",path.to.value01=""
## License
Copyright &copy; 2023 VMware, Inc. Copyright &copy; 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -0,0 +1,39 @@
{{/*
Copyright VMware, Inc.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return true if the detected platform is Openshift
Usage:
{{- include "common.compatibility.isOpenshift" . -}}
*/}}
{{- define "common.compatibility.isOpenshift" -}}
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1" -}}
{{- true -}}
{{- end -}}
{{- end -}}
{{/*
Render a compatible securityContext depending on the platform. By default it is maintained as it is. In other platforms like Openshift we remove default user/group values that do not work out of the box with the restricted-v1 SCC
Usage:
{{- include "common.compatibility.renderSecurityContext" (dict "secContext" .Values.containerSecurityContext "context" $) -}}
*/}}
{{- define "common.compatibility.renderSecurityContext" -}}
{{- $adaptedContext := .secContext -}}
{{- if .context.Values.global.compatibility -}}
{{- if .context.Values.global.compatibility.openshift -}}
{{- if or (eq .context.Values.global.compatibility.openshift.adaptSecurityContext "force") (and (eq .context.Values.global.compatibility.openshift.adaptSecurityContext "auto") (include "common.compatibility.isOpenshift" .context)) -}}
{{/* Remove incompatible user/group values that do not work in Openshift out of the box */}}
{{- $adaptedContext = omit $adaptedContext "fsGroup" "runAsUser" "runAsGroup" -}}
{{- if not .secContext.seLinuxOptions -}}
{{/* If it is an empty object, we remove it from the resulting context because it causes validation issues */}}
{{- $adaptedContext = omit $adaptedContext "seLinuxOptions" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- omit $adaptedContext "enabled" | toYaml -}}
{{- end -}}
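To put this helper in context, a hedged sketch of the values layout that drives it (the key names mirror the `global.compatibility.openshift.adaptSecurityContext` parameter documented for the bundled Redis&reg; chart further below; this snippet is illustrative, not an excerpt from the chart):

```yaml
# values.yaml (sketch)
global:
  compatibility:
    openshift:
      adaptSecurityContext: auto   # "auto" adapts only when Openshift is detected; "force" always; "disabled" never
```

A chart template then wraps its securityContext with the `common.compatibility.renderSecurityContext` call shown in the Usage comment above, so that `runAsUser`, `runAsGroup` and `fsGroup` are dropped when running on Openshift.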

View File

@@ -0,0 +1,50 @@
{{/*
Copyright VMware, Inc.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "common.resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "common.resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "1024Mi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "1024Mi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "1024Mi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "1024Mi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "1024Mi")
)
"xlarge" (dict
"requests" (dict "cpu" "2.0" "memory" "4096Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "1024Mi")
)
"2xlarge" (dict
"requests" (dict "cpu" "4.0" "memory" "8192Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "1024Mi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}
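For context, a hedged sketch of how a consuming template typically combines an explicit `resources` value with this preset helper (the `master.*` value names are taken from the Redis&reg; parameter table below; the exact call site and indentation vary per chart):

```yaml
{{- /* container spec fragment (sketch) */}}
{{- if .Values.master.resources }}
resources: {{- toYaml .Values.master.resources | nindent 2 }}
{{- else if ne .Values.master.resourcesPreset "none" }}
resources: {{- include "common.resources.preset" (dict "type" .Values.master.resourcesPreset) | nindent 2 }}
{{- end }}
```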

View File

@@ -78,6 +78,8 @@ Params:
- chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart. - chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart.
- context - Context - Required - Parent context. - context - Context - Required - Parent context.
- failOnNew - Boolean - Optional - Default to true. If set to false, skip errors adding new keys to existing secrets. - failOnNew - Boolean - Optional - Default to true. If set to false, skip errors adding new keys to existing secrets.
- skipB64enc - Boolean - Optional - Default to false. If set to true, the secret value will not be base64 encoded.
- skipQuote - Boolean - Optional - Default to false. If set to true, no quotes will be added around the secret.
The order in which this function returns a secret password: The order in which this function returns a secret password:
1. Already existing 'Secret' resource 1. Already existing 'Secret' resource
(If a 'Secret' resource is found under the name provided to the 'secret' parameter to this function and that 'Secret' resource contains a key with the name passed as the 'key' parameter to this function then the value of this existing secret password will be returned) (If a 'Secret' resource is found under the name provided to the 'secret' parameter to this function and that 'Secret' resource contains a key with the name passed as the 'key' parameter to this function then the value of this existing secret password will be returned)
@@ -91,7 +93,6 @@ The order in which this function returns a secret password:
{{- $password := "" }} {{- $password := "" }}
{{- $subchart := "" }} {{- $subchart := "" }}
{{- $failOnNew := default true .failOnNew }}
{{- $chartName := default "" .chartName }} {{- $chartName := default "" .chartName }}
{{- $passwordLength := default 10 .length }} {{- $passwordLength := default 10 .length }}
{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }} {{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }}
@@ -99,12 +100,14 @@ The order in which this function returns a secret password:
{{- $secretData := (lookup "v1" "Secret" (include "common.names.namespace" .context) .secret).data }} {{- $secretData := (lookup "v1" "Secret" (include "common.names.namespace" .context) .secret).data }}
{{- if $secretData }} {{- if $secretData }}
{{- if hasKey $secretData .key }} {{- if hasKey $secretData .key }}
{{- $password = index $secretData .key | quote }} {{- $password = index $secretData .key | b64dec }}
{{- else if $failOnNew }} {{- else if not (eq .failOnNew false) }}
{{- printf "\nPASSWORDS ERROR: The secret \"%s\" does not contain the key \"%s\"\n" .secret .key | fail -}} {{- printf "\nPASSWORDS ERROR: The secret \"%s\" does not contain the key \"%s\"\n" .secret .key | fail -}}
{{- else if $providedPasswordValue }}
{{- $password = $providedPasswordValue | toString }}
{{- end -}} {{- end -}}
{{- else if $providedPasswordValue }} {{- else if $providedPasswordValue }}
{{- $password = $providedPasswordValue | toString | b64enc | quote }} {{- $password = $providedPasswordValue | toString }}
{{- else }} {{- else }}
{{- if .context.Values.enabled }} {{- if .context.Values.enabled }}
@@ -120,12 +123,19 @@ The order in which this function returns a secret password:
{{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }} {{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }}
{{- $password = randAscii $passwordLength }} {{- $password = randAscii $passwordLength }}
{{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }} {{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }}
{{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }} {{- $password = printf "%s%s" $subStr $password | toString | shuffle }}
{{- else }} {{- else }}
{{- $password = randAlphaNum $passwordLength | b64enc | quote }} {{- $password = randAlphaNum $passwordLength }}
{{- end }} {{- end }}
{{- end -}} {{- end -}}
{{- if not .skipB64enc }}
{{- $password = $password | b64enc }}
{{- end -}}
{{- if .skipQuote -}}
{{- printf "%s" $password -}} {{- printf "%s" $password -}}
{{- else -}}
{{- printf "%s" $password | quote -}}
{{- end -}}
{{- end -}} {{- end -}}
{{/* {{/*
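To make the new `skipB64enc` and `skipQuote` flags above concrete, a hedged example call (the secret and key names are invented; the parameter names follow the list documented in this helper):

```yaml
{{- $pass := include "common.secrets.passwords.manage" (dict "secret" "my-release-redis" "key" "redis-password" "providedValues" (list "auth.password") "length" 16 "skipB64enc" true "skipQuote" true "context" $) }}
```

With both flags set, the helper returns the plain generated value, which is useful when the caller handles encoding and quoting itself.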

View File

@@ -13,7 +13,70 @@ Usage:
{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }} {{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }}
WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment. WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/ +info https://docs.bitnami.com/tutorials/understand-rolling-tags-containers
{{- end }} {{- end }}
{{- end -}}
{{/*
Warning about not setting the resource object in all deployments.
Usage:
{{ include "common.warnings.resources" (dict "sections" (list "path1" "path2") context $) }}
Example:
{{- include "common.warnings.resources" (dict "sections" (list "csiProvider.provider" "server" "volumePermissions" "") "context" $) }}
The list in the example assumes that the following values exist:
- csiProvider.provider.resources
- server.resources
- volumePermissions.resources
- resources
*/}}
{{- define "common.warnings.resources" -}}
{{- $values := .context.Values -}}
{{- $printMessage := false -}}
{{ $affectedSections := list -}}
{{- range .sections -}}
{{- if eq . "" -}}
{{/* Case where the resources section is at the root (one main deployment in the chart) */}}
{{- if not (index $values "resources") -}}
{{- $affectedSections = append $affectedSections "resources" -}}
{{- $printMessage = true -}}
{{- end -}}
{{- else -}}
{{/* Case where there are multiple resources sections (more than one main deployment in the chart) */}}
{{- $keys := split "." . -}}
{{/* We iterate through the different levels until arriving to the resource section. Example: a.b.c.resources */}}
{{- $section := $values -}}
{{- range $keys -}}
{{- $section = index $section . -}}
{{- end -}}
{{- if not (index $section "resources") -}}
{{/* If the section has enabled=false or replicaCount=0, do not include it */}}
{{- if and (hasKey $section "enabled") -}}
{{- if index $section "enabled" -}}
{{/* enabled=true */}}
{{- $affectedSections = append $affectedSections (printf "%s.resources" .) -}}
{{- $printMessage = true -}}
{{- end -}}
{{- else if and (hasKey $section "replicaCount") -}}
{{/* We need a casting to int because number 0 is not treated as an int by default */}}
{{- if (gt (index $section "replicaCount" | int) 0) -}}
{{/* replicaCount > 0 */}}
{{- $affectedSections = append $affectedSections (printf "%s.resources" .) -}}
{{- $printMessage = true -}}
{{- end -}}
{{- else -}}
{{/* Default case, add it to the affected sections */}}
{{- $affectedSections = append $affectedSections (printf "%s.resources" .) -}}
{{- $printMessage = true -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if $printMessage }}
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
{{- range $affectedSections }}
- {{ . }}
{{- end }}
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
{{- end -}}
{{- end -}} {{- end -}}
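For reference, a hedged example of how a chart might invoke this warning from its NOTES.txt, following the Usage comment above (the section names listed are illustrative):

```yaml
{{- include "common.warnings.resources" (dict "sections" (list "master" "replica" "sentinel" "metrics") "context" $) }}
```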

View File

@@ -19,3 +19,5 @@
.project .project
.idea/ .idea/
*.tmproj *.tmproj
# img folder
img/

View File

@@ -1,6 +1,6 @@
dependencies: dependencies:
- name: common - name: common
repository: oci://registry-1.docker.io/bitnamicharts repository: oci://registry-1.docker.io/bitnamicharts
version: 2.13.3 version: 2.19.0
digest: sha256:9a971689db0c66ea95ac2e911c05014c2b96c6077c991131ff84f2982f88fb83 digest: sha256:ac559eb57710d8904e266424ee364cd686d7e24517871f0c5c67f7c4500c2bcc
generated: "2023-10-19T12:32:36.790999138Z" generated: "2024-03-08T15:56:40.04210215Z"
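As a side note, a `Chart.lock` like this is normally regenerated rather than edited by hand; assuming the chart sources are checked out locally (the path below is a placeholder), the digest and timestamp are refreshed with:

```console
helm dependency update ./redis
```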

View File

@@ -1,17 +1,19 @@
annotations: annotations:
category: Database category: Database
images: | images: |
- name: kubectl
image: docker.io/bitnami/kubectl:1.29.2-debian-12-r3
- name: os-shell - name: os-shell
image: docker.io/bitnami/os-shell:11-debian-11-r91 image: docker.io/bitnami/os-shell:12-debian-12-r16
- name: redis-exporter
image: docker.io/bitnami/redis-exporter:1.55.0-debian-11-r2
- name: redis-sentinel
image: docker.io/bitnami/redis-sentinel:7.2.3-debian-11-r1
- name: redis - name: redis
image: docker.io/bitnami/redis:7.2.3-debian-11-r1 image: docker.io/bitnami/redis:7.2.4-debian-12-r9
- name: redis-exporter
image: docker.io/bitnami/redis-exporter:1.58.0-debian-12-r4
- name: redis-sentinel
image: docker.io/bitnami/redis-sentinel:7.2.4-debian-12-r7
licenses: Apache-2.0 licenses: Apache-2.0
apiVersion: v2 apiVersion: v2
appVersion: 7.2.3 appVersion: 7.2.4
dependencies: dependencies:
- name: common - name: common
repository: oci://registry-1.docker.io/bitnamicharts repository: oci://registry-1.docker.io/bitnamicharts
@@ -33,4 +35,4 @@ maintainers:
name: redis name: redis
sources: sources:
- https://github.com/bitnami/charts/tree/main/bitnami/redis - https://github.com/bitnami/charts/tree/main/bitnami/redis
version: 18.4.0 version: 18.19.2

View File

@@ -11,10 +11,10 @@ Disclaimer: Redis is a registered trademark of Redis Ltd. Any rights therein are
## TL;DR
```console ```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis helm install my-release oci://registry-1.docker.io/bitnamicharts/redis
``` ```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`. Looking to use Redis&reg; in production? Try [VMware Tanzu Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
## Introduction
@@ -37,8 +37,6 @@ The main features of each chart are the following:
| Single write point (single master) | Multiple write points (multiple masters) | | Single write point (single master) | Multiple write points (multiple masters) |
| ![Redis&reg; Topology](img/redis-topology.png) | ![Redis&reg; Cluster Topology](img/redis-cluster-topology.png) | | ![Redis&reg; Topology](img/redis-topology.png) | ![Redis&reg; Cluster Topology](img/redis-cluster-topology.png) |
Looking to use Redis&reg; in production? Try [VMware Tanzu Application Catalog](https://bitnami.com/enterprise), the enterprise edition of Bitnami Application Catalog.
## Prerequisites
- Kubernetes 1.23+
@@ -74,11 +72,12 @@ The command removes all the Kubernetes components associated with the chart and
### Global parameters
| Name | Description | Value | | Name | Description | Value |
| ------------------------- | ------------------------------------------------------ | ----- | | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `global.imageRegistry` | Global Docker image registry | `""` | | `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` | | `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` | | `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |
| `global.redis.password` | Global Redis&reg; password (overrides `auth.password`) | `""` | | `global.redis.password` | Global Redis&reg; password (overrides `auth.password`) | `""` |
| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | `disabled` |
### Common parameters
@@ -87,6 +86,7 @@ The command removes all the Kubernetes components associated with the chart and
| `kubeVersion` | Override Kubernetes version | `""` | | `kubeVersion` | Override Kubernetes version | `""` |
| `nameOverride` | String to partially override common.names.fullname | `""` | | `nameOverride` | String to partially override common.names.fullname | `""` |
| `fullnameOverride` | String to fully override common.names.fullname | `""` | | `fullnameOverride` | String to fully override common.names.fullname | `""` |
| `namespaceOverride` | String to fully override common.names.namespace | `""` |
| `commonLabels` | Labels to add to all deployed objects | `{}` | | `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` | | `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `secretAnnotations` | Annotations to add to secret | `{}` | | `secretAnnotations` | Annotations to add to secret | `{}` |
@@ -121,13 +121,14 @@ The command removes all the Kubernetes components associated with the chart and
| `auth.existingSecret` | The name of an existing secret with Redis&reg; credentials | `""` | | `auth.existingSecret` | The name of an existing secret with Redis&reg; credentials | `""` |
| `auth.existingSecretPasswordKey` | Password key to be retrieved from existing secret | `""` | | `auth.existingSecretPasswordKey` | Password key to be retrieved from existing secret | `""` |
| `auth.usePasswordFiles` | Mount credentials as files instead of using an environment variable | `false` | | `auth.usePasswordFiles` | Mount credentials as files instead of using an environment variable | `false` |
| `auth.usePasswordFileFromSecret` | Mount password file from secret | `true` |
| `commonConfiguration` | Common configuration to be added into the ConfigMap | `""` | | `commonConfiguration` | Common configuration to be added into the ConfigMap | `""` |
| `existingConfigmap` | The name of an existing ConfigMap with your custom configuration for Redis&reg; nodes | `""` | | `existingConfigmap` | The name of an existing ConfigMap with your custom configuration for Redis&reg; nodes | `""` |
### Redis&reg; master configuration parameters
| Name | Description | Value | | Name | Description | Value |
| ---------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | ------------------------ | | ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------ |
| `master.count` | Number of Redis&reg; master instances to deploy (experimental, requires additional configuration) | `1` | | `master.count` | Number of Redis&reg; master instances to deploy (experimental, requires additional configuration) | `1` |
| `master.configuration` | Configuration for Redis&reg; master nodes | `""` | | `master.configuration` | Configuration for Redis&reg; master nodes | `""` |
| `master.disableCommands` | Array with Redis&reg; commands to disable on master nodes | `["FLUSHDB","FLUSHALL"]` | | `master.disableCommands` | Array with Redis&reg; commands to disable on master nodes | `["FLUSHDB","FLUSHALL"]` |
@@ -161,15 +162,20 @@ The command removes all the Kubernetes components associated with the chart and
| `master.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` | | `master.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `master.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` | | `master.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `master.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` | | `master.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `master.resources.limits` | The resources limits for the Redis&reg; master containers | `{}` | | `master.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if master.resources is set (master.resources is recommended for production). | `none` |
| `master.resources.requests` | The requested resources for the Redis&reg; master containers | `{}` | | `master.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `master.podSecurityContext.enabled` | Enabled Redis&reg; master pods' Security Context | `true` | | `master.podSecurityContext.enabled` | Enabled Redis&reg; master pods' Security Context | `true` |
| `master.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `master.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `master.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `master.podSecurityContext.fsGroup` | Set Redis&reg; master pod's Security Context fsGroup | `1001` | | `master.podSecurityContext.fsGroup` | Set Redis&reg; master pod's Security Context fsGroup | `1001` |
| `master.containerSecurityContext.enabled` | Enabled Redis&reg; master containers' Security Context | `true` | | `master.containerSecurityContext.enabled` | Enabled Redis&reg; master containers' Security Context | `true` |
| `master.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `master.containerSecurityContext.runAsUser` | Set Redis&reg; master containers' Security Context runAsUser | `1001` | | `master.containerSecurityContext.runAsUser` | Set Redis&reg; master containers' Security Context runAsUser | `1001` |
| `master.containerSecurityContext.runAsGroup` | Set Redis&reg; master containers' Security Context runAsGroup | `0` | | `master.containerSecurityContext.runAsGroup` | Set Redis&reg; master containers' Security Context runAsGroup | `0` |
| `master.containerSecurityContext.runAsNonRoot` | Set Redis&reg; master containers' Security Context runAsNonRoot | `true` | | `master.containerSecurityContext.runAsNonRoot` | Set Redis&reg; master containers' Security Context runAsNonRoot | `true` |
| `master.containerSecurityContext.allowPrivilegeEscalation` | Is it possible to escalate Redis&reg; pod(s) privileges | `false` | | `master.containerSecurityContext.allowPrivilegeEscalation` | Is it possible to escalate Redis&reg; pod(s) privileges | `false` |
| `master.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `false` |
| `master.containerSecurityContext.seccompProfile.type` | Set Redis&reg; master containers' Security Context seccompProfile | `RuntimeDefault` | | `master.containerSecurityContext.seccompProfile.type` | Set Redis&reg; master containers' Security Context seccompProfile | `RuntimeDefault` |
| `master.containerSecurityContext.capabilities.drop` | Set Redis&reg; master containers' Security Context capabilities to drop | `["ALL"]` | | `master.containerSecurityContext.capabilities.drop` | Set Redis&reg; master containers' Security Context capabilities to drop | `["ALL"]` |
| `master.kind` | Use either Deployment, StatefulSet (default) or DaemonSet | `StatefulSet` | | `master.kind` | Use either Deployment, StatefulSet (default) or DaemonSet | `StatefulSet` |
@@ -177,6 +183,7 @@ The command removes all the Kubernetes components associated with the chart and
| `master.updateStrategy.type` | Redis&reg; master statefulset strategy type | `RollingUpdate` | | `master.updateStrategy.type` | Redis&reg; master statefulset strategy type | `RollingUpdate` |
| `master.minReadySeconds` | How many seconds a pod needs to be ready before killing the next, during update | `0` | | `master.minReadySeconds` | How many seconds a pod needs to be ready before killing the next, during update | `0` |
| `master.priorityClassName` | Redis&reg; master pods' priorityClassName | `""` | | `master.priorityClassName` | Redis&reg; master pods' priorityClassName | `""` |
| `master.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `master.hostAliases` | Redis&reg; master pods host aliases | `[]` | | `master.hostAliases` | Redis&reg; master pods host aliases | `[]` |
| `master.podLabels` | Extra labels for Redis&reg; master pods | `{}` | | `master.podLabels` | Extra labels for Redis&reg; master pods | `{}` |
| `master.podAnnotations` | Annotations for Redis&reg; master pods | `{}` | | `master.podAnnotations` | Annotations for Redis&reg; master pods | `{}` |
@@ -222,21 +229,22 @@ The command removes all the Kubernetes components associated with the chart and
| `master.service.internalTrafficPolicy` | Redis&reg; master service internal traffic policy (requires Kubernetes v1.22 or greater to be usable) | `Cluster` | | `master.service.internalTrafficPolicy` | Redis&reg; master service internal traffic policy (requires Kubernetes v1.22 or greater to be usable) | `Cluster` |
| `master.service.clusterIP` | Redis&reg; master service Cluster IP | `""` | | `master.service.clusterIP` | Redis&reg; master service Cluster IP | `""` |
| `master.service.loadBalancerIP` | Redis&reg; master service Load Balancer IP | `""` | | `master.service.loadBalancerIP` | Redis&reg; master service Load Balancer IP | `""` |
| `master.service.loadBalancerClass` | master service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `master.service.loadBalancerSourceRanges` | Redis&reg; master service Load Balancer sources | `[]` | | `master.service.loadBalancerSourceRanges` | Redis&reg; master service Load Balancer sources | `[]` |
| `master.service.externalIPs` | Redis&reg; master service External IPs | `[]` | | `master.service.externalIPs` | Redis&reg; master service External IPs | `[]` |
| `master.service.annotations` | Additional custom annotations for Redis&reg; master service | `{}` | | `master.service.annotations` | Additional custom annotations for Redis&reg; master service | `{}` |
| `master.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` | | `master.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
| `master.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` | | `master.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `master.terminationGracePeriodSeconds` | Integer setting the termination grace period for the redis-master pods | `30` | | `master.terminationGracePeriodSeconds` | Integer setting the termination grace period for the redis-master pods | `30` |
| `master.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` | | `master.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `master.serviceAccount.name` | The name of the ServiceAccount to use. | `""` | | `master.serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
| `master.serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `true` | | `master.serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `false` |
| `master.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` | | `master.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
### Redis&reg; replicas configuration parameters
| Name | Description | Value | | Name | Description | Value |
| ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | ------------------------ | | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| `replica.kind` | Use either DaemonSet or StatefulSet (default) | `StatefulSet` | | `replica.kind` | Use either DaemonSet or StatefulSet (default) | `StatefulSet` |
| `replica.replicaCount` | Number of Redis&reg; replicas to deploy | `3` | | `replica.replicaCount` | Number of Redis&reg; replicas to deploy | `3` |
| `replica.configuration` | Configuration for Redis&reg; replicas nodes | `""` | | `replica.configuration` | Configuration for Redis&reg; replicas nodes | `""` |
@@ -274,15 +282,20 @@ The command removes all the Kubernetes components associated with the chart and
| `replica.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` | | `replica.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `replica.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` | | `replica.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `replica.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` | | `replica.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `replica.resources.limits` | The resources limits for the Redis&reg; replicas containers | `{}` | | `replica.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if replica.resources is set (replica.resources is recommended for production). | `none` |
| `replica.resources.requests` | The requested resources for the Redis&reg; replicas containers | `{}` | | `replica.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `replica.podSecurityContext.enabled` | Enabled Redis&reg; replicas pods' Security Context | `true` | | `replica.podSecurityContext.enabled` | Enabled Redis&reg; replicas pods' Security Context | `true` |
| `replica.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `replica.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `replica.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `replica.podSecurityContext.fsGroup` | Set Redis&reg; replicas pod's Security Context fsGroup | `1001` | | `replica.podSecurityContext.fsGroup` | Set Redis&reg; replicas pod's Security Context fsGroup | `1001` |
| `replica.containerSecurityContext.enabled` | Enabled Redis&reg; replicas containers' Security Context | `true` | | `replica.containerSecurityContext.enabled` | Enabled Redis&reg; replicas containers' Security Context | `true` |
| `replica.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `replica.containerSecurityContext.runAsUser` | Set Redis&reg; replicas containers' Security Context runAsUser | `1001` | | `replica.containerSecurityContext.runAsUser` | Set Redis&reg; replicas containers' Security Context runAsUser | `1001` |
| `replica.containerSecurityContext.runAsGroup` | Set Redis&reg; replicas containers' Security Context runAsGroup | `0` | | `replica.containerSecurityContext.runAsGroup` | Set Redis&reg; replicas containers' Security Context runAsGroup | `0` |
| `replica.containerSecurityContext.runAsNonRoot` | Set Redis&reg; replicas containers' Security Context runAsNonRoot | `true` | | `replica.containerSecurityContext.runAsNonRoot` | Set Redis&reg; replicas containers' Security Context runAsNonRoot | `true` |
| `replica.containerSecurityContext.allowPrivilegeEscalation` | Set Redis&reg; replicas pod's Security Context allowPrivilegeEscalation | `false` | | `replica.containerSecurityContext.allowPrivilegeEscalation` | Set Redis&reg; replicas pod's Security Context allowPrivilegeEscalation | `false` |
| `replica.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `false` |
| `replica.containerSecurityContext.seccompProfile.type` | Set Redis&reg; replicas containers' Security Context seccompProfile | `RuntimeDefault` | | `replica.containerSecurityContext.seccompProfile.type` | Set Redis&reg; replicas containers' Security Context seccompProfile | `RuntimeDefault` |
| `replica.containerSecurityContext.capabilities.drop` | Set Redis&reg; replicas containers' Security Context capabilities to drop | `["ALL"]` | | `replica.containerSecurityContext.capabilities.drop` | Set Redis&reg; replicas containers' Security Context capabilities to drop | `["ALL"]` |
| `replica.schedulerName` | Alternate scheduler for Redis&reg; replicas pods | `""` | | `replica.schedulerName` | Alternate scheduler for Redis&reg; replicas pods | `""` |
@@ -290,6 +303,7 @@ The command removes all the Kubernetes components associated with the chart and
| `replica.minReadySeconds` | How many seconds a pod needs to be ready before killing the next, during update | `0` | | `replica.minReadySeconds` | How many seconds a pod needs to be ready before killing the next, during update | `0` |
| `replica.priorityClassName` | Redis&reg; replicas pods' priorityClassName | `""` | | `replica.priorityClassName` | Redis&reg; replicas pods' priorityClassName | `""` |
| `replica.podManagementPolicy` | podManagementPolicy to manage scaling operation of %%MAIN_CONTAINER_NAME%% pods | `""` | | `replica.podManagementPolicy` | podManagementPolicy to manage scaling operation of %%MAIN_CONTAINER_NAME%% pods | `""` |
| `replica.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `replica.hostAliases` | Redis&reg; replicas pods host aliases | `[]` | | `replica.hostAliases` | Redis&reg; replicas pods host aliases | `[]` |
| `replica.podLabels` | Extra labels for Redis&reg; replicas pods | `{}` | | `replica.podLabels` | Extra labels for Redis&reg; replicas pods | `{}` |
| `replica.podAnnotations` | Annotations for Redis&reg; replicas pods | `{}` | | `replica.podAnnotations` | Annotations for Redis&reg; replicas pods | `{}` |
@@ -335,6 +349,7 @@ The command removes all the Kubernetes components associated with the chart and
| `replica.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` | | `replica.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `replica.service.clusterIP` | Redis&reg; replicas service Cluster IP | `""` | | `replica.service.clusterIP` | Redis&reg; replicas service Cluster IP | `""` |
| `replica.service.loadBalancerIP` | Redis&reg; replicas service Load Balancer IP | `""` | | `replica.service.loadBalancerIP` | Redis&reg; replicas service Load Balancer IP | `""` |
| `replica.service.loadBalancerClass` | replicas service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `replica.service.loadBalancerSourceRanges` | Redis&reg; replicas service Load Balancer sources | `[]` | | `replica.service.loadBalancerSourceRanges` | Redis&reg; replicas service Load Balancer sources | `[]` |
| `replica.service.annotations` | Additional custom annotations for Redis&reg; replicas service | `{}` | | `replica.service.annotations` | Additional custom annotations for Redis&reg; replicas service | `{}` |
| `replica.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` | | `replica.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
@@ -345,15 +360,15 @@ The command removes all the Kubernetes components associated with the chart and
| `replica.autoscaling.maxReplicas` | Maximum replicas for the pod autoscaling | `11` | | `replica.autoscaling.maxReplicas` | Maximum replicas for the pod autoscaling | `11` |
| `replica.autoscaling.targetCPU` | Percentage of CPU to consider when autoscaling | `""` | | `replica.autoscaling.targetCPU` | Percentage of CPU to consider when autoscaling | `""` |
| `replica.autoscaling.targetMemory` | Percentage of Memory to consider when autoscaling | `""` | | `replica.autoscaling.targetMemory` | Percentage of Memory to consider when autoscaling | `""` |
| `replica.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` | | `replica.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `replica.serviceAccount.name` | The name of the ServiceAccount to use. | `""` | | `replica.serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
| `replica.serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `true` | | `replica.serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `false` |
| `replica.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` | | `replica.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
### Redis&reg; Sentinel configuration parameters
| Name | Description | Value | | Name | Description | Value |
| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------- | | ------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------- |
| `sentinel.enabled` | Use Redis&reg; Sentinel on Redis&reg; pods. | `false` | | `sentinel.enabled` | Use Redis&reg; Sentinel on Redis&reg; pods. | `false` |
| `sentinel.image.registry` | Redis&reg; Sentinel image registry | `REGISTRY_NAME` | | `sentinel.image.registry` | Redis&reg; Sentinel image registry | `REGISTRY_NAME` |
| `sentinel.image.repository` | Redis&reg; Sentinel image repository | `REPOSITORY_NAME/redis-sentinel` | | `sentinel.image.repository` | Redis&reg; Sentinel image repository | `REPOSITORY_NAME/redis-sentinel` |
@@ -416,12 +431,14 @@ The command removes all the Kubernetes components associated with the chart and
| `sentinel.persistentVolumeClaimRetentionPolicy.enabled` | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | `false` | | `sentinel.persistentVolumeClaimRetentionPolicy.enabled` | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | `false` |
| `sentinel.persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` | | `sentinel.persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` |
| `sentinel.persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` | | `sentinel.persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` |
| `sentinel.resources.limits` | The resources limits for the Redis&reg; Sentinel containers | `{}` | | `sentinel.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if sentinel.resources is set (sentinel.resources is recommended for production). | `none` |
| `sentinel.resources.requests` | The requested resources for the Redis&reg; Sentinel containers | `{}` | | `sentinel.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `sentinel.containerSecurityContext.enabled` | Enabled Redis&reg; Sentinel containers' Security Context | `true` | | `sentinel.containerSecurityContext.enabled` | Enabled Redis&reg; Sentinel containers' Security Context | `true` |
| `sentinel.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `sentinel.containerSecurityContext.runAsUser` | Set Redis&reg; Sentinel containers' Security Context runAsUser | `1001` | | `sentinel.containerSecurityContext.runAsUser` | Set Redis&reg; Sentinel containers' Security Context runAsUser | `1001` |
| `sentinel.containerSecurityContext.runAsGroup` | Set Redis&reg; Sentinel containers' Security Context runAsGroup | `0` | | `sentinel.containerSecurityContext.runAsGroup` | Set Redis&reg; Sentinel containers' Security Context runAsGroup | `0` |
| `sentinel.containerSecurityContext.runAsNonRoot` | Set Redis&reg; Sentinel containers' Security Context runAsNonRoot | `true` | | `sentinel.containerSecurityContext.runAsNonRoot` | Set Redis&reg; Sentinel containers' Security Context runAsNonRoot | `true` |
| `sentinel.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `false` |
| `sentinel.containerSecurityContext.allowPrivilegeEscalation` | Set Redis&reg; Sentinel containers' Security Context allowPrivilegeEscalation | `false` | | `sentinel.containerSecurityContext.allowPrivilegeEscalation` | Set Redis&reg; Sentinel containers' Security Context allowPrivilegeEscalation | `false` |
| `sentinel.containerSecurityContext.seccompProfile.type` | Set Redis&reg; Sentinel containers' Security Context seccompProfile | `RuntimeDefault` | | `sentinel.containerSecurityContext.seccompProfile.type` | Set Redis&reg; Sentinel containers' Security Context seccompProfile | `RuntimeDefault` |
| `sentinel.containerSecurityContext.capabilities.drop` | Set Redis&reg; Sentinel containers' Security Context capabilities to drop | `["ALL"]` | | `sentinel.containerSecurityContext.capabilities.drop` | Set Redis&reg; Sentinel containers' Security Context capabilities to drop | `["ALL"]` |
@@ -436,7 +453,9 @@ The command removes all the Kubernetes components associated with the chart and
| `sentinel.service.externalTrafficPolicy` | Redis&reg; Sentinel service external traffic policy | `Cluster` | | `sentinel.service.externalTrafficPolicy` | Redis&reg; Sentinel service external traffic policy | `Cluster` |
| `sentinel.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` | | `sentinel.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `sentinel.service.clusterIP` | Redis&reg; Sentinel service Cluster IP | `""` | | `sentinel.service.clusterIP` | Redis&reg; Sentinel service Cluster IP | `""` |
| `sentinel.service.createMaster` | Enable master service pointing to the current master (experimental) | `false` |
| `sentinel.service.loadBalancerIP` | Redis&reg; Sentinel service Load Balancer IP | `""` | | `sentinel.service.loadBalancerIP` | Redis&reg; Sentinel service Load Balancer IP | `""` |
| `sentinel.service.loadBalancerClass` | sentinel service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `sentinel.service.loadBalancerSourceRanges` | Redis&reg; Sentinel service Load Balancer sources | `[]` | | `sentinel.service.loadBalancerSourceRanges` | Redis&reg; Sentinel service Load Balancer sources | `[]` |
| `sentinel.service.annotations` | Additional custom annotations for Redis&reg; Sentinel service | `{}` | | `sentinel.service.annotations` | Additional custom annotations for Redis&reg; Sentinel service | `{}` |
| `sentinel.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` | | `sentinel.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
@@ -449,8 +468,9 @@ The command removes all the Kubernetes components associated with the chart and
| Name | Description | Value | | Name | Description | Value |
| ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------- | | ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `serviceBindings.enabled` | Create secret for service binding (Experimental) | `false` | | `serviceBindings.enabled` | Create secret for service binding (Experimental) | `false` |
| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources | `false` | | `networkPolicy.enabled` | Enable creation of NetworkPolicy resources | `true` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` | | `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations. | `true` |
| `networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` | | `networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` | | `networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` | | `networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
@@ -464,7 +484,7 @@ The command removes all the Kubernetes components associated with the chart and
| `rbac.rules` | Custom RBAC rules to set | `[]` | | `rbac.rules` | Custom RBAC rules to set | `[]` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` | | `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to use. | `""` | | `serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
| `serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `true` | | `serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `false` |
| `serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` | | `serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `pdb.create` | Specifies whether a PodDisruptionBudget should be created | `false` | | `pdb.create` | Specifies whether a PodDisruptionBudget should be created | `false` |
| `pdb.minAvailable` | Min number of pods that must still be available after the eviction | `1` | | `pdb.minAvailable` | Min number of pods that must still be available after the eviction | `1` |
@@ -482,13 +502,14 @@ The command removes all the Kubernetes components associated with the chart and
### Metrics Parameters
| Name | Description | Value | | Name | Description | Value |
| ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | -------------------------------- | | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------- |
| `metrics.enabled` | Start a sidecar prometheus exporter to expose Redis&reg; metrics | `false` | | `metrics.enabled` | Start a sidecar prometheus exporter to expose Redis&reg; metrics | `false` |
| `metrics.image.registry` | Redis&reg; Exporter image registry | `REGISTRY_NAME` | | `metrics.image.registry` | Redis&reg; Exporter image registry | `REGISTRY_NAME` |
| `metrics.image.repository` | Redis&reg; Exporter image repository | `REPOSITORY_NAME/redis-exporter` | | `metrics.image.repository` | Redis&reg; Exporter image repository | `REPOSITORY_NAME/redis-exporter` |
| `metrics.image.digest` | Redis&reg; Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` | | `metrics.image.digest` | Redis&reg; Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `metrics.image.pullPolicy` | Redis&reg; Exporter image pull policy | `IfNotPresent` | | `metrics.image.pullPolicy` | Redis&reg; Exporter image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Redis&reg; Exporter image pull secrets | `[]` | | `metrics.image.pullSecrets` | Redis&reg; Exporter image pull secrets | `[]` |
| `metrics.containerPorts.http` | Metrics HTTP container port | `9121` |
| `metrics.startupProbe.enabled`                              | Enable startupProbe on Redis&reg; replicas nodes                                                                      | `false`                          |
| `metrics.startupProbe.initialDelaySeconds`                  | Initial delay seconds for startupProbe                                                                                | `10`                             |
| `metrics.startupProbe.periodSeconds`                        | Period seconds for startupProbe                                                                                       | `10`                             |
@@ -515,26 +536,31 @@ The command removes all the Kubernetes components associated with the chart and
| `metrics.extraArgs`                                         | Extra arguments for Redis&reg; exporter, for example:                                                                 | `{}`                             |
| `metrics.extraEnvVars`                                      | Array with extra environment variables to add to Redis&reg; exporter                                                  | `[]`                             |
| `metrics.containerSecurityContext.enabled`                  | Enabled Redis&reg; exporter containers' Security Context                                                              | `true`                           |
| `metrics.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `metrics.containerSecurityContext.runAsUser`                | Set Redis&reg; exporter containers' Security Context runAsUser                                                        | `1001`                           |
| `metrics.containerSecurityContext.runAsGroup`               | Set Redis&reg; exporter containers' Security Context runAsGroup                                                       | `0`                              |
| `metrics.containerSecurityContext.runAsNonRoot`             | Set Redis&reg; exporter containers' Security Context runAsNonRoot                                                     | `true`                           |
| `metrics.containerSecurityContext.allowPrivilegeEscalation` | Set Redis&reg; exporter containers' Security Context allowPrivilegeEscalation                                         | `false`                          |
| `metrics.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `false` |
| `metrics.containerSecurityContext.seccompProfile.type`      | Set Redis&reg; exporter containers' Security Context seccompProfile                                                   | `RuntimeDefault`                 |
| `metrics.containerSecurityContext.capabilities.drop`        | Set Redis&reg; exporter containers' Security Context capabilities to drop                                             | `["ALL"]`                        |
| `metrics.extraVolumes`                                      | Optionally specify extra list of additional volumes for the Redis&reg; metrics sidecar                                | `[]`                             |
| `metrics.extraVolumeMounts`                                 | Optionally specify extra list of additional volumeMounts for the Redis&reg; metrics sidecar                           | `[]`                             |
| `metrics.resourcesPreset`                                   | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production). | `none`                           |
| `metrics.resources`                                         | Set container requests and limits for different resources like CPU or memory (essential for production workloads)    | `{}`                             |
| `metrics.podLabels`                                         | Extra labels for Redis&reg; exporter pods                                                                             | `{}`                             |
| `metrics.podAnnotations`                                    | Annotations for Redis&reg; exporter pods                                                                              | `{}`                             |
| `metrics.service.enabled` | Create Service resource(s) for scraping metrics using PrometheusOperator ServiceMonitor, can be disabled when using a PodMonitor | `true` |
| `metrics.service.type`                                      | Redis&reg; exporter service type                                                                                      | `ClusterIP`                      |
| `metrics.service.ports.http`                                | Redis&reg; exporter service port                                                                                      | `9121`                           |
| `metrics.service.externalTrafficPolicy`                     | Redis&reg; exporter service external traffic policy                                                                   | `Cluster`                        |
| `metrics.service.extraPorts`                                | Extra ports to expose (normally used with the `sidecar` value)                                                        | `[]`                             |
| `metrics.service.loadBalancerIP`                            | Redis&reg; exporter service Load Balancer IP                                                                          | `""`                             |
| `metrics.service.loadBalancerClass` | exporter service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `metrics.service.loadBalancerSourceRanges`                  | Redis&reg; exporter service Load Balancer sources                                                                     | `[]`                             |
| `metrics.service.annotations`                               | Additional custom annotations for Redis&reg; exporter service                                                         | `{}`                             |
| `metrics.service.clusterIP`                                 | Redis&reg; exporter service Cluster IP                                                                                | `""`                             |
| `metrics.serviceMonitor.port` | the service port to scrape metrics from | `http-metrics` |
| `metrics.serviceMonitor.enabled`                            | Create ServiceMonitor resource(s) for scraping metrics using PrometheusOperator                                       | `false`                          |
| `metrics.serviceMonitor.namespace`                          | The namespace in which the ServiceMonitor will be created                                                             | `""`                             |
| `metrics.serviceMonitor.interval`                           | The interval at which metrics should be scraped                                                                       | `30s`                            |
@@ -546,6 +572,8 @@ The command removes all the Kubernetes components associated with the chart and
| `metrics.serviceMonitor.podTargetLabels`                    | Labels from the Kubernetes pod to be transferred to the created metrics                                               | `[]`                             |
| `metrics.serviceMonitor.sampleLimit`                        | Limit of how many samples should be scraped from every Pod                                                            | `false`                          |
| `metrics.serviceMonitor.targetLimit`                        | Limit of how many targets should be scraped                                                                           | `false`                          |
| `metrics.serviceMonitor.additionalEndpoints` | Additional endpoints to scrape (e.g sentinel) | `[]` |
| `metrics.podMonitor.port` | the pod port to scrape metrics from | `metrics` |
| `metrics.podMonitor.enabled`                                | Create PodMonitor resource(s) for scraping metrics using PrometheusOperator                                           | `false`                          |
| `metrics.podMonitor.namespace`                              | The namespace in which the PodMonitor will be created                                                                 | `""`                             |
| `metrics.podMonitor.interval`                               | The interval at which metrics should be scraped                                                                       | `30s`                            |
@@ -557,6 +585,7 @@ The command removes all the Kubernetes components associated with the chart and
| `metrics.podMonitor.podTargetLabels`                        | Labels from the Kubernetes pod to be transferred to the created metrics                                               | `[]`                             |
| `metrics.podMonitor.sampleLimit`                            | Limit of how many samples should be scraped from every Pod                                                            | `false`                          |
| `metrics.podMonitor.targetLimit`                            | Limit of how many targets should be scraped                                                                           | `false`                          |
| `metrics.podMonitor.additionalEndpoints` | Additional endpoints to scrape (e.g sentinel) | `[]` |
| `metrics.prometheusRule.enabled`                            | Create a custom prometheusRule Resource for scraping metrics using PrometheusOperator                                 | `false`                          |
| `metrics.prometheusRule.namespace`                          | The namespace in which the prometheusRule will be created                                                             | `""`                             |
| `metrics.prometheusRule.additionalLabels`                   | Additional labels for the prometheusRule                                                                              | `{}`                             |
@@ -565,16 +594,25 @@ The command removes all the Kubernetes components associated with the chart and
### Init Container Parameters
| Name                                                         | Description                                                                                                                                                                                                                                     | Value                                                               |
| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------- |
| `volumePermissions.enabled`                                  | Enable init container that changes the owner/group of the PV mount point to `runAsUser:fsGroup`                                                                                                                                                | `false`                                                             |
| `volumePermissions.image.registry`                           | OS Shell + Utility image registry                                                                                                                                                                                                              | `REGISTRY_NAME`                                                     |
| `volumePermissions.image.repository`                         | OS Shell + Utility image repository                                                                                                                                                                                                            | `REPOSITORY_NAME/os-shell`                                          |
| `volumePermissions.image.digest`                             | OS Shell + Utility image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag                                                                                                                             | `""`                                                                |
| `volumePermissions.image.pullPolicy`                         | OS Shell + Utility image pull policy                                                                                                                                                                                                           | `IfNotPresent`                                                      |
| `volumePermissions.image.pullSecrets`                        | OS Shell + Utility image pull secrets                                                                                                                                                                                                          | `[]`                                                                |
| `volumePermissions.resourcesPreset`                          | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production). | `none`                                                              |
| `volumePermissions.resources`                                | Set container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                             | `{}`                                                                |
| `volumePermissions.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `volumePermissions.containerSecurityContext.runAsUser`      | Set init container's Security Context runAsUser                                                                                                                                                                                                | `0`                                                                 |
| `kubectl.image.registry` | Kubectl image registry | `REGISTRY_NAME` |
| `kubectl.image.repository` | Kubectl image repository | `REPOSITORY_NAME/kubectl` |
| `kubectl.image.digest` | Kubectl image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `kubectl.image.pullPolicy` | Kubectl image pull policy | `IfNotPresent` |
| `kubectl.image.pullSecrets` | Kubectl pull secrets | `[]` |
| `kubectl.command` | kubectl command to execute | `["/opt/bitnami/scripts/kubectl-scripts/update-master-label.sh"]` |
| `kubectl.resources.limits` | The resources limits for the kubectl containers | `{}` |
| `kubectl.resources.requests` | The requested resources for the kubectl containers | `{}` |
| `sysctl.enabled`                                             | Enable init container to modify Kernel settings                                                                                                                                                                                                | `false`                                                             |
| `sysctl.image.registry`                                      | OS Shell + Utility image registry                                                                                                                                                                                                              | `REGISTRY_NAME`                                                     |
| `sysctl.image.repository`                                    | OS Shell + Utility image repository                                                                                                                                                                                                            | `REPOSITORY_NAME/os-shell`                                          |
@@ -583,8 +621,8 @@ The command removes all the Kubernetes components associated with the chart and
| `sysctl.image.pullSecrets`                                   | OS Shell + Utility image pull secrets                                                                                                                                                                                                          | `[]`                                                                |
| `sysctl.command`                                             | Override default init-sysctl container command (useful when using custom images)                                                                                                                                                              | `[]`                                                                |
| `sysctl.mountHostSys`                                        | Mount the host `/sys` folder to `/host-sys`                                                                                                                                                                                                    | `false`                                                             |
| `sysctl.resourcesPreset`                                     | Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if sysctl.resources is set (sysctl.resources is recommended for production).                       | `none`                                                              |
| `sysctl.resources`                                           | Set container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                             | `{}`                                                                |
### useExternalDNS Parameters
@@ -616,11 +654,17 @@ helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/redis
```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
> **Tip**: You can use the default [values.yaml](https://github.com/bitnami/charts/tree/main/bitnami/redis/values.yaml)
## Configuration and installation details
### Resource requests and limits
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check the parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the `resourcesPreset` values, which automatically set the `resources` section according to different presets. Check these presets in [the bitnami/common chart](https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15). However, in production workloads using `resourcesPreset` is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
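For example, a minimal values snippet that enables the metrics sidecar and sets explicit requests and limits for it (the figures below are illustrative placeholders, not recommendations):

```yaml
metrics:
  enabled: true
  # Explicit resources take precedence over metrics.resourcesPreset
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 250m
      memory: 256Mi
```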
### [Rolling VS Immutable tags](https://docs.bitnami.com/tutorials/understand-rolling-tags-containers)
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
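As an illustration only, the exporter image parameters from the table above could be pinned by digest rather than by tag; the digest below is a placeholder, not a real image reference:

```yaml
metrics:
  image:
    registry: REGISTRY_NAME
    repository: REPOSITORY_NAME/redis-exporter
    # A digest, if set, overrides the tag (placeholder value shown)
    digest: "sha256:0000000000000000000000000000000000000000000000000000000000000000"
```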
@@ -628,7 +672,7 @@ Bitnami will release a new chart updating its containers if a new version of the
### Use a different Redis&reg; version
To modify the application version used in this chart, specify a different version of the image using the `image.tag` parameter and/or a different repository using the `image.repository` parameter.
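For example, an illustrative install command (the `TAG_NAME` value is a placeholder; use a tag that actually exists for your registry and repository):

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set image.tag=TAG_NAME
```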
### Bootstrapping with an External Cluster
@@ -730,13 +774,27 @@ It's recommended to only change `master.count` if you know what you are doing.
### Using a password file
To use a password file for Redis&reg; you need to create a secret containing the password and then deploy the chart using that secret. Follow these instructions:
- Create the secret with the password. It is important that the file with the password is called `redis-password`.
```console
kubectl create secret generic redis-password-secret --from-file=redis-password.yaml
```
- Deploy the Helm Chart using the secret name as parameter:
```text
usePassword=true
usePasswordFile=true
existingSecret=redis-password-secret
sentinels.enabled=true
metrics.enabled=true
```
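For convenience, the same settings can be passed in one illustrative install command (parameter names are taken verbatim from the list above; the registry and repository placeholders follow the convention used elsewhere in this README):

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set usePassword=true \
  --set usePasswordFile=true \
  --set existingSecret=redis-password-secret \
  --set sentinels.enabled=true \
  --set metrics.enabled=true
```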
### Securing traffic using TLS
TLS support can be enabled in the chart by specifying the `tls.` parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the cluster:
- `tls.enabled`: Enable TLS support. Defaults to `false`
- `tls.existingSecret`: Name of the secret that contains the certificates. No defaults.
@@ -744,7 +802,23 @@ TLS support can be enabled in the chart by specifying the `tls.` parameters whil
- `tls.certKeyFilename`: Certificate key filename. No defaults.
- `tls.certCAFilename`: CA Certificate filename. No defaults.
For example:
First, create the secret with the certificate files:
```console
kubectl create secret generic certificates-tls-secret --from-file=./cert.pem --from-file=./cert.key --from-file=./ca.pem
```
Then, use the following parameters:
```console
tls.enabled="true"
tls.existingSecret="certificates-tls-secret"
tls.certFilename="cert.pem"
tls.certKeyFilename="cert.key"
tls.certCAFilename="ca.pem"
```
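For reference, the same settings expressed as a single illustrative install command:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set tls.enabled=true \
  --set tls.existingSecret=certificates-tls-secret \
  --set tls.certFilename=cert.pem \
  --set tls.certKeyFilename=cert.key \
  --set tls.certCAFilename=ca.pem
```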
### Metrics
@@ -760,11 +834,65 @@ tls-client-cert-file
tls-ca-cert-file
```
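A minimal sketch for turning on the exporter sidecar together with a Prometheus Operator ServiceMonitor, using only parameters from the table above (the `monitoring` namespace is an assumption; use whatever namespace your Prometheus Operator watches):

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    # Assumed namespace; adjust to where your Prometheus Operator looks for ServiceMonitors
    namespace: monitoring
    interval: 30s
```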
### Deploy a custom metrics script in the sidecar
A custom Lua script can be added to the `redis-exporter` sidecar by way of the `metrics.extraArgs.script` parameter. The pathname of the script must exist on the container, or the `redis_exporter` process (and therefore the whole pod) will refuse to start. The script can be provided to the sidecar containers via the `metrics.extraVolumes` and `metrics.extraVolumeMounts` parameters:
```yaml
metrics:
  extraVolumeMounts:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      mountPath: '{{ printf "/mnt/%s/" (include "common.names.name" .) }}'
      readOnly: true
  extraVolumes:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      configMap:
        name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
  extraArgs:
    script: '{{ printf "/mnt/%s/my_custom_metrics.lua" (include "common.names.name" .) }}'
```
Then deploy the script into the correct location via `extraDeploy`:
```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
    data:
      my_custom_metrics.lua: |
        -- LUA SCRIPT CODE HERE, e.g.,
        return {'bitnami_makes_the_best_charts', '1'}
```
### Host Kernel Settings
Redis&reg; may require some changes in the kernel of the host machine to work as expected, in particular increasing the `somaxconn` value and disabling transparent huge pages. To do so, you can set up a privileged `initContainer` with the `sysctl` config values (see the parameter table above), for example:
```yaml
sysctl:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      install_packages procps
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
```
Alternatively, for Kubernetes 1.12+ you can set `securityContext.sysctls`, which will configure `sysctls` for master and replica pods. Example:
```yaml
securityContext:
  sysctls:
    - name: net.core.somaxconn
      value: "10000"
```
Note that this will not disable transparent huge pages.
## Persistence
@@ -784,13 +912,115 @@ helm install my-release --set master.persistence.existingClaim=PVC_NAME oci://RE
## Backup and restore
To back up and restore Redis deployments on Kubernetes, you will need to create a snapshot of the data in the source cluster, and later restore it in a new cluster with the new parameters. Follow the instructions below:
### Step 1: Back up the deployment
- Connect to one of the nodes and start the Redis CLI tool. Then, run the commands below:
```text
$ kubectl exec -it my-release-master-0 bash
$ redis-cli
127.0.0.1:6379> auth your_current_redis_password
OK
127.0.0.1:6379> save
OK
```
- Copy the dump file from the Redis node:
```console
kubectl cp my-release-master-0:/data/dump.rdb dump.rdb -c redis
```
### Step 2: Restore the data on the destination cluster
To restore the data in a new cluster, you will need to create a PVC and then upload the *dump.rdb* file to the new volume.
Follow these steps:
- In the [*values.yaml*](https://github.com/bitnami/charts/blob/main/bitnami/redis/values.yaml) file set the *appendonly* parameter to *no*. You can skip this step if it is already configured as *no*.
```yaml
commonConfiguration: |-
  # Enable AOF https://redis.io/topics/persistence#append-only-file
  appendonly no
  # Disable RDB persistence, AOF persistence already enabled.
  save ""
```
> *Note that the `Enable AOF` comment belongs to the original config file and what you're actually doing is disabling it. This change will only be necessary for the temporary cluster you're creating to upload the dump.*
- Start the new cluster to create the PVCs. Use the command below as an example:
```console
helm install new-redis -f values.yaml . --set cluster.enabled=true --set cluster.slaveCount=3
```
- Now that the PVCs have been created, stop the cluster and copy the *dump.rdb* file onto the persisted data by using a helper pod.
```text
$ helm delete new-redis
$ kubectl run --generator=run-pod/v1 -i --rm --tty volpod --overrides='
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "redisvolpod"
  },
  "spec": {
    "containers": [{
      "command": [
        "tail",
        "-f",
        "/dev/null"
      ],
      "image": "bitnami/minideb",
      "name": "mycontainer",
      "volumeMounts": [{
        "mountPath": "/mnt",
        "name": "redisdata"
      }]
    }],
    "restartPolicy": "Never",
    "volumes": [{
      "name": "redisdata",
      "persistentVolumeClaim": {
        "claimName": "redis-data-new-redis-master-0"
      }
    }]
  }
}' --image="bitnami/minideb"
$ kubectl cp dump.rdb redisvolpod:/mnt/dump.rdb
$ kubectl delete pod volpod
```
- Restart the cluster:
> **INFO:** The *appendonly* parameter can be safely restored to your desired value.
```console
helm install new-redis -f values.yaml . --set cluster.enabled=true --set cluster.slaveCount=3
```
## NetworkPolicy
To enable network policy for Redis&reg;, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`.
With NetworkPolicy enabled, only pods with the generated client label will be able to connect to Redis. This label will be displayed in the output after a successful install.
With `networkPolicy.ingressNSMatchLabels`, pods from other namespaces can connect to Redis. Set `networkPolicy.ingressNSPodMatchLabels` to match pod labels in the matched namespace. For example, for a namespace labeled `redis=external` and pods in that namespace labeled `redis-client=true`, the fields should be set:
```yaml
networkPolicy:
  enabled: true
  ingressNSMatchLabels:
    redis: external
  ingressNSPodMatchLabels:
    redis-client: true
```
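As noted above, when `networkPolicy.allowExternal` is set to `false`, only pods carrying the generated client label can reach Redis&reg;. The exact label name for your release is shown in the post-install notes; a hypothetical example for a release named `my-release` could look like this:

```console
kubectl label pod my-client my-release-redis-client=true
```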
### Setting Pod's affinity
@@ -864,7 +1094,7 @@ The Redis&reg; sentinel exporter was removed in this version because the upstrea
- `sentinel.metrics.*` parameters are deprecated in favor of `metrics.sentinel.*` ones.
- New parameters to add custom command, environment variables, sidecars, init containers, etc. were added.
- Chart labels were adapted to follow the [Helm charts standard labels](https://helm.sh/docs/chart_best_practices/labels/#standard-labels).
- values.yaml metadata was adapted to follow the format supported by [Readme Generator for Helm](https://github.com/bitnami/readme-generator-for-helm).
Consequences:
@@ -1004,7 +1234,7 @@ kubectl patch deployments my-release-redis-metrics --type=json -p='[{"op": "remo
## License
Copyright &copy; 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
