Update Velero to v1.17.0 (#1484)

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

- enables nodeAgent by default
- fixes https://github.com/cozystack/cozystack/issues/1442

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[]
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- New Features
  - Per-repository maintenance via ConfigMap with global and repo-specific settings.
  - PodVolumeBackup/Restore: cancel requests, progress reporting, node/uploader visibility, expanded phases.
  - New volumeGroupSnapshotLabelKey on Backups and Schedules.
  - DataUpload: specify CSI driver.
  - Metrics Service: ipFamilyPolicy and ipFamilies support.
  - Optional container resizePolicy.

- Changes
  - Upgraded to Velero 1.17.0; Helm chart v11.0.0.
  - Deployment name standardized to “velero”.
  - Node agent enabled by default.
  - Templates now block deprecated options with clear error messages.

- Documentation
  - Expanded README on repository maintenance, deprecations, and upgrade guidance.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Authored by Andrei Kvapil on 2025-10-06 21:00:45 +02:00, committed by GitHub.
17 changed files with 450 additions and 126 deletions

View File

@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 1.16.1
appVersion: 1.17.0
description: A Helm chart for velero
home: https://github.com/vmware-tanzu/velero
icon: https://cdn-images-1.medium.com/max/1600/1*-9mb3AKnKdcL_QD3CMnthQ.png
@@ -14,4 +14,4 @@ maintainers:
name: velero
sources:
- https://github.com/vmware-tanzu/velero
version: 10.0.5
version: 11.0.0

View File

@@ -102,6 +102,107 @@ CSI plugin has been merged into velero repo in v1.14 release. It will be install
This version removes the `nodeAgent.privileged` field; use `nodeAgent.containerSecurityContext.privileged` instead.
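For example, a values override that previously set the removed field would now be written as follows (a minimal sketch; only the two field names come from the note above):

```yaml
nodeAgent:
  # privileged: true            # removed in this chart version
  containerSecurityContext:
    privileged: true            # replacement location for the same setting
```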
## Repository Maintenance Configuration
Starting from Velero v1.15, you can configure repository maintenance jobs with different resource limits and node affinity settings per repository using a ConfigMap. This feature is supported through the Helm chart.
### Basic Usage
To enable per-repository maintenance configuration, provide repository-specific settings and/or a global configuration that is applied across all repositories:
```yaml
configuration:
  repositoryMaintenanceJob:
    repositoryConfigData:
      name: "my-repo-maintenance-config" # Optional, defaults to "velero-repo-maintenance"
      global:
        podResources:
          cpuRequest: "100m"
          cpuLimit: "200m"
          memoryRequest: "100Mi"
          memoryLimit: "200Mi"
        keepLatestMaintenanceJobs: 1
```
### Per-Repository Configuration
You can configure specific settings for individual repositories using repository keys. Repository keys are formed as: `{namespace}-{storageLocation}-{repositoryType}`.
For example, if you have a BackupRepository for namespace `prod` using storage location `s3-backup` with repository type `kopia`, the key would be `prod-s3-backup-kopia`.
```yaml
configuration:
  repositoryMaintenanceJob:
    repositoryConfigData:
      global:
        podResources:
          cpuRequest: "100m"
          cpuLimit: "200m"
          memoryRequest: "100Mi"
          memoryLimit: "200Mi"
      repositories:
        "prod-s3-backup-kopia":
          podResources:
            cpuRequest: "500m"
            cpuLimit: "1000m"
            memoryRequest: "512Mi"
            memoryLimit: "1024Mi"
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  dedicated: "backup"
```
### Node Affinity and Priority Class
You can specify node affinity and priority class for maintenance jobs:
```yaml
configuration:
  repositoryMaintenanceJob:
    repositoryConfigData:
      global:
        podResources:
          cpuRequest: "100m"
          cpuLimit: "200m"
          memoryRequest: "100Mi"
          memoryLimit: "200Mi"
        loadAffinity:
          - nodeSelector:
              matchExpressions:
                - key: "cloud.google.com/machine-family"
                  operator: "In"
                  values: ["e2"]
          - nodeSelector:
              matchExpressions:
                - key: "topology.kubernetes.io/zone"
                  operator: "In"
                  values: ["us-central1-a", "us-central1-b", "us-central1-c"]
        priorityClassName: "low-priority"
```
**Note**: `priorityClassName` is only supported in the global configuration section and applies to all maintenance jobs.
### Backward Compatibility
When neither `repositoryConfigData.global` nor `repositoryConfigData.repositories` is provided (the default), the chart continues to use the legacy global settings:
```yaml
configuration:
  repositoryMaintenanceJob:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 1024Mi
    latestJobsCount: 3
```
Note: The legacy parameters (`--maintenance-job-cpu-request`, `--maintenance-job-mem-request`, `--maintenance-job-cpu-limit`, `--maintenance-job-mem-limit`) are deprecated in Velero v1.15 and will be removed in v1.17.
For more information, see the [Velero Repository Maintenance documentation](https://velero.io/docs/main/repository-maintenance/).
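For illustration, with the per-repository values shown above the chart would render a ConfigMap along these lines (a sketch based on the chart template added in this release; the `velero` namespace is a placeholder for the release namespace and labels are omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: velero-repo-maintenance   # default name when repositoryConfigData.name is not set
  namespace: velero               # placeholder for the release namespace
data:
  global: |
    {
      "podResources": {
        "cpuRequest": "100m",
        "cpuLimit": "200m",
        "memoryRequest": "100Mi",
        "memoryLimit": "200Mi"
      }
    }
  prod-s3-backup-kopia: |
    {
      "loadAffinity": [
        { "nodeSelector": { "matchLabels": { "dedicated": "backup" } } }
      ],
      "podResources": {
        "cpuRequest": "500m",
        "cpuLimit": "1000m",
        "memoryRequest": "512Mi",
        "memoryLimit": "1024Mi"
      }
    }
```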
## Upgrading Velero
### Upgrading to v1.16

View File

@@ -73,7 +73,7 @@ spec:
resticIdentifier:
description: |-
ResticIdentifier is the full restic-compatible string for identifying
this repository.
this repository. This field is only used when RepositoryType is "restic".
type: string
volumeNamespace:
description: |-
@@ -83,7 +83,6 @@ spec:
required:
- backupStorageLocation
- maintenanceFrequency
- resticIdentifier
- volumeNamespace
type: object
status:

View File

@@ -512,6 +512,10 @@ spec:
uploads to perform when using the uploader.
type: integer
type: object
volumeGroupSnapshotLabelKey:
description: VolumeGroupSnapshotLabelKey specifies the label key
to group PVCs under a VGS.
type: string
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names
of VolumeSnapshotLocations associated with this backup.
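For reference, a Backup using the new field might look like the following; the names and the label key are illustrative, and only `volumeGroupSnapshotLabelKey` itself comes from the CRD change above:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nightly-apps              # illustrative
  namespace: velero               # illustrative install namespace
spec:
  includedNamespaces:
    - my-app                      # illustrative
  storageLocation: default
  # PVCs sharing this label are grouped under one VolumeGroupSnapshot (VGS)
  volumeGroupSnapshotLabelKey: app.kubernetes.io/instance
```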

View File

@@ -89,6 +89,9 @@ spec:
of the CSI snapshot.
nullable: true
properties:
driver:
description: Driver is the driver used by the VolumeSnapshotContent
type: string
snapshotClass:
description: SnapshotClass is the name of the snapshot class
that the volume snapshot is created with

View File

@@ -17,38 +17,41 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Pod Volume Backup status such as New/InProgress
- description: PodVolumeBackup status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Time when this backup was started
- description: Time duration since this PodVolumeBackup was started
jsonPath: .status.startTimestamp
name: Created
name: Started
type: date
- description: Namespace of the pod containing the volume to be backed up
jsonPath: .spec.pod.namespace
name: Namespace
type: string
- description: Name of the pod containing the volume to be backed up
jsonPath: .spec.pod.name
name: Pod
type: string
- description: Name of the volume to be backed up
jsonPath: .spec.volume
name: Volume
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
type: string
- description: Completed bytes
format: int64
jsonPath: .status.progress.bytesDone
name: Bytes Done
type: integer
- description: Total bytes
format: int64
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Name of the Backup Storage Location where this backup should
be stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- jsonPath: .metadata.creationTimestamp
- description: Time duration since this PodVolumeBackup was created
jsonPath: .metadata.creationTimestamp
name: Age
type: date
- description: Name of the node where the PodVolumeBackup is processed
jsonPath: .status.node
name: Node
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader
type: string
name: v1
schema:
openAPIV3Schema:
@@ -78,6 +81,11 @@ spec:
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
type: string
cancel:
description: |-
Cancel indicates request to cancel the ongoing PodVolumeBackup. It can be set
when the PodVolumeBackup is in InProgress phase
type: boolean
node:
description: Node is the name of the node that the Pod is running
on.
@@ -167,6 +175,13 @@ spec:
status:
description: PodVolumeBackupStatus is the current status of a PodVolumeBackup.
properties:
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the pod volume backup is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: |-
CompletionTimestamp records the time a backup was completed.
@@ -188,7 +203,11 @@ spec:
description: Phase is the current state of the PodVolumeBackup.
enum:
- New
- Accepted
- Prepared
- InProgress
- Canceling
- Canceled
- Completed
- Failed
type: string

View File

@@ -17,39 +17,41 @@ spec:
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Namespace of the pod containing the volume to be restored
jsonPath: .spec.pod.namespace
name: Namespace
- description: PodVolumeRestore status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Name of the pod containing the volume to be restored
jsonPath: .spec.pod.name
name: Pod
- description: Time duration since this PodVolumeRestore was started
jsonPath: .status.startTimestamp
name: Started
type: date
- description: Completed bytes
format: int64
jsonPath: .status.progress.bytesDone
name: Bytes Done
type: integer
- description: Total bytes
format: int64
jsonPath: .status.progress.totalBytes
name: Total Bytes
type: integer
- description: Name of the Backup Storage Location where the backup data is
stored
jsonPath: .spec.backupStorageLocation
name: Storage Location
type: string
- description: Time duration since this PodVolumeRestore was created
jsonPath: .metadata.creationTimestamp
name: Age
type: date
- description: Name of the node where the PodVolumeRestore is processed
jsonPath: .status.node
name: Node
type: string
- description: The type of the uploader to handle data transfer
jsonPath: .spec.uploaderType
name: Uploader Type
type: string
- description: Name of the volume to be restored
jsonPath: .spec.volume
name: Volume
type: string
- description: Pod Volume Restore status such as New/InProgress
jsonPath: .status.phase
name: Status
type: string
- description: Pod Volume Restore status such as New/InProgress
format: int64
jsonPath: .status.progress.totalBytes
name: TotalBytes
type: integer
- description: Pod Volume Restore status such as New/InProgress
format: int64
jsonPath: .status.progress.bytesDone
name: BytesDone
type: integer
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
schema:
openAPIV3Schema:
@@ -79,6 +81,11 @@ spec:
BackupStorageLocation is the name of the backup storage location
where the backup repository is stored.
type: string
cancel:
description: |-
Cancel indicates request to cancel the ongoing PodVolumeRestore. It can be set
when the PodVolumeRestore is in InProgress phase
type: boolean
pod:
description: Pod is a reference to the pod containing the volume
to be restored.
@@ -164,6 +171,13 @@ spec:
status:
description: PodVolumeRestoreStatus is the current status of a PodVolumeRestore.
properties:
acceptedTimestamp:
description: |-
AcceptedTimestamp records the time the pod volume restore is to be prepared.
The server's time is used for AcceptedTimestamp
format: date-time
nullable: true
type: string
completionTimestamp:
description: |-
CompletionTimestamp records the time a restore was completed.
@@ -176,11 +190,19 @@ spec:
description: Message is a message about the pod volume restore's
status.
type: string
node:
description: Node is name of the node where the pod volume restore
is processed.
type: string
phase:
description: Phase is the current state of the PodVolumeRestore.
enum:
- New
- Accepted
- Prepared
- InProgress
- Canceling
- Canceled
- Completed
- Failed
type: string

View File

@@ -553,6 +553,10 @@ spec:
parallel uploads to perform when using the uploader.
type: integer
type: object
volumeGroupSnapshotLabelKey:
description: VolumeGroupSnapshotLabelKey specifies the label
key to group PVCs under a VGS.
type: string
volumeSnapshotLocations:
description: VolumeSnapshotLocations is a list containing names
of VolumeSnapshotLocations associated with this backup.

View File

@@ -75,6 +75,22 @@ More info on the official site: https://velero.io/docs
{{- end }}
{{- end }}
{{- if eq .Values.configuration.uploaderType "restic" }}
{{- $breaking = print $breaking "\n\nERROR: restic uploaderType was removed, please use a different one" }}
{{- end }}
{{- if hasKey .Values.configuration.repositoryMaintenanceJob "requests" }}
{{- $breaking = print $breaking "\n\nREMOVED: configuration.repositoryMaintenanceJob.requests has been removed, please use the configmap" }}
{{- end }}
{{- if hasKey .Values.configuration.repositoryMaintenanceJob "limits" }}
{{- $breaking = print $breaking "\n\nREMOVED: configuration.repositoryMaintenanceJob.limits has been removed, please use the configmap" }}
{{- end }}
{{- if hasKey .Values.configuration.repositoryMaintenanceJob "latestJobsCount" }}
{{- $breaking = print $breaking "\n\nREMOVED: configuration.repositoryMaintenanceJob.latestJobsCount has been removed, please use the configmap" }}
{{- end }}
{{- if $breaking }}
{{- fail (print $breaking_title $breaking) }}
{{- end }}

View File

@@ -3,7 +3,7 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "velero.fullname" . }}
name: velero
namespace: {{ .Release.Namespace }}
{{- with .Values.annotations }}
annotations:
@@ -21,7 +21,7 @@ metadata:
{{- end }}
spec:
replicas: 1
{{- if .Values.revisionHistoryLimit }}
{{- if hasKey .Values "revisionHistoryLimit" }}
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
{{- end }}
strategy:
@@ -57,7 +57,7 @@ spec:
{{- end }}
{{- end }}
spec:
{{- with .Values.hostAliases -}}
{{- with .Values.hostAliases }}
hostAliases:
{{- toYaml . | nindent 8 }}
{{- end }}
@@ -180,24 +180,9 @@ spec:
- --namespace={{ . }}
{{- end }}
{{- with .repositoryMaintenanceJob }}
{{- with .requests }}
{{- with .cpu }}
- --maintenance-job-cpu-request={{ . }}
{{- end }}
{{- with .memory }}
- --maintenance-job-mem-request={{ . }}
{{- end }}
{{- end }}
{{- with .limits }}
{{- with .cpu }}
- --maintenance-job-cpu-limit={{ . }}
{{- end }}
{{- with .memory }}
- --maintenance-job-mem-limit={{ . }}
{{- end }}
{{- end }}
{{- with .latestJobsCount }}
- --keep-latest-maintenance-jobs={{ . }}
{{- if and .repositoryConfigData (or .repositoryConfigData.global .repositoryConfigData.repositories) }}
- --repo-maintenance-job-configmap={{ default "velero-repo-maintenance" .repositoryConfigData.name }}
{{- else }}
{{- end }}
{{- end }}
{{- with .extraArgs }}
@@ -209,6 +194,10 @@ spec:
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.resizePolicy }}
resizePolicy:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if .Values.metrics.enabled }}
{{- with .Values.livenessProbe }}
livenessProbe: {{- toYaml . | nindent 12 }}

View File

@@ -31,8 +31,8 @@ spec:
- /bin/sh
- -c
- |
{{- range .Values.namespace.labels }}
kubectl label namespace {{ $.Release.Namespace }} {{ .key }}={{ .value }}
{{- range $key, $value := .Values.namespace.labels }}
kubectl label namespace {{ $.Release.Namespace }} {{ $key }}={{ $value }}
{{- end }}
{{- if .Values.kubectl.extraVolumeMounts }}
volumeMounts:

View File

@@ -49,7 +49,7 @@ spec:
{{- end }}
{{- end }}
spec:
{{ with .Values.hostAliases -}}
{{- with .Values.nodeAgent.hostAliases }}
hostAliases:
{{- toYaml . | nindent 8 }}
{{- end }}
@@ -68,7 +68,7 @@ spec:
{{- if .Values.nodeAgent.priorityClassName }}
priorityClassName: {{ include "velero.nodeAgent.priorityClassName" . }}
{{- end }}
{{- if .Values.runtimeClassName }}
{{- if .Values.nodeAgent.runtimeClassName }}
runtimeClassName: {{ include "velero.nodeAgent.runtimeClassName" . }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
@@ -197,6 +197,10 @@ spec:
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.nodeAgent.resizePolicy }}
resizePolicy:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.nodeAgent.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}

View File

@@ -0,0 +1,21 @@
{{- if and .Values.configuration .Values.configuration.repositoryMaintenanceJob .Values.configuration.repositoryMaintenanceJob.repositoryConfigData (or .Values.configuration.repositoryMaintenanceJob.repositoryConfigData.global .Values.configuration.repositoryMaintenanceJob.repositoryConfigData.repositories) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ default "velero-repo-maintenance" .Values.configuration.repositoryMaintenanceJob.repositoryConfigData.name }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ include "velero.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "velero.chart" . }}
data:
{{- with .Values.configuration.repositoryMaintenanceJob.repositoryConfigData.global }}
global: |
{{- . | toPrettyJson | nindent 4 }}
{{- end }}
{{- range $repoKey, $repoConfig := .Values.configuration.repositoryMaintenanceJob.repositoryConfigData.repositories }}
{{ $repoKey }}: |
{{- $repoConfig | toPrettyJson | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -23,6 +23,12 @@ spec:
{{- if .Values.metrics.service.internalTrafficPolicy }}
internalTrafficPolicy: {{ .Values.metrics.service.internalTrafficPolicy }}
{{- end }}
{{- if .Values.metrics.service.ipFamilyPolicy }}
ipFamilyPolicy: {{ .Values.metrics.service.ipFamilyPolicy }}
{{- end }}
{{- if .Values.metrics.service.ipFamilies }}
ipFamilies: {{ toYaml .Values.metrics.service.ipFamilies | nindent 4 }}
{{- end }}
type: {{ .Values.metrics.service.type }}
ports:
- name: http-monitoring
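A dual-stack metrics Service could then be requested from the chart values, for example (a sketch; the policy and family order are illustrative):

```yaml
metrics:
  enabled: true
  service:
    ipFamilyPolicy: PreferDualStack
    ipFamilies:
      - IPv4
      - IPv6
```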

View File

@@ -105,7 +105,7 @@
"type": "string"
},
"initContainers": {
"type": ["array", "null"],
"type": ["array", "string", "null"],
"items": {}
},
"podSecurityContext": {
@@ -367,10 +367,10 @@
"type": ["string", "null"]
},
"provider": {
"type": ["string", "null"]
"type": ["string"]
},
"bucket": {
"type": ["string", "null"]
"type": ["string"]
},
"caCert": {
"type": ["string", "null"]
@@ -405,8 +405,7 @@
},
"required": [
"provider",
"bucket",
"config"
"bucket"
]
}
},
@@ -419,7 +418,7 @@
"type": ["string", "null"]
},
"provider": {
"type": ["string", "null"]
"type": ["string"]
},
"credential": {
"type": "object",
@@ -438,9 +437,7 @@
}
},
"required": [
"name",
"provider",
"config"
"provider"
]
}
},
@@ -473,6 +470,80 @@
},
"latestJobsCount": {
"type": "number"
},
"repositoryConfigData": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of the ConfigMap to create for per-repository maintenance configuration"
},
"global": {
"type": "object",
"description": "Global configuration applied to all repositories when no specific repository configuration is found",
"properties": {
"podResources": {
"type": "object",
"properties": {
"cpuRequest": {"type": "string"},
"cpuLimit": {"type": "string"},
"memoryRequest": {"type": "string"},
"memoryLimit": {"type": "string"}
}
},
"keepLatestMaintenanceJobs": {
"type": "number"
},
"priorityClassName": {
"type": "string"
},
"loadAffinity": {
"type": "array",
"items": {
"type": "object",
"properties": {
"nodeSelector": {
"type": "object"
}
}
}
}
}
},
"repositories": {
"type": "object",
"description": "Repository-specific configurations keyed by repository identifier (namespace-storageLocation-repositoryType)",
"additionalProperties": {
"type": "object",
"properties": {
"podResources": {
"type": "object",
"properties": {
"cpuRequest": {"type": "string"},
"cpuLimit": {"type": "string"},
"memoryRequest": {"type": "string"},
"memoryLimit": {"type": "string"}
}
},
"keepLatestMaintenanceJobs": {
"type": "number"
},
"loadAffinity": {
"type": "array",
"items": {
"type": "object",
"properties": {
"nodeSelector": {
"type": "object"
}
}
}
}
}
}
}
},
"required": []
}
},
"required": []
@@ -688,7 +759,6 @@
},
"required": [
"podVolumePath",
"pluginVolumePath",
"dnsPolicy"
]
},
@@ -732,4 +802,4 @@
]
}
}
}
}

View File

@@ -7,18 +7,12 @@ namespace:
labels: {}
# Enforce Pod Security Standards with Namespace Labels
# https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/
# - key: pod-security.kubernetes.io/enforce
# value: privileged
# - key: pod-security.kubernetes.io/enforce-version
# value: latest
# - key: pod-security.kubernetes.io/audit
# value: privileged
# - key: pod-security.kubernetes.io/audit-version
# value: latest
# - key: pod-security.kubernetes.io/warn
# value: privileged
# - key: pod-security.kubernetes.io/warn-version
# value: latest
# pod-security.kubernetes.io/enforce: privileged
# pod-security.kubernetes.io/enforce-version: latest
# pod-security.kubernetes.io/audit: privileged
# pod-security.kubernetes.io/audit-version: latest
# pod-security.kubernetes.io/warn: privileged
# pod-security.kubernetes.io/warn-version: latest
##
## End of namespace-related settings.
@@ -33,7 +27,7 @@ namespace:
# enabling node-agent). Required.
image:
repository: velero/velero
tag: v1.16.1
tag: v1.17.0
# Digest value example: sha256:d238835e151cec91c6a811fe3a89a66d3231d9f64d09e5f3c49552672d271f38.
# If used, it will take precedence over the image.tag.
# digest:
@@ -81,6 +75,14 @@ resources: {}
# cpu: 1000m
# memory: 512Mi
# Container resize policy for the Velero deployment.
# See: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/
resizePolicy: []
# - resourceName: cpu
# restartPolicy: NotRequired
# - resourceName: memory
# restartPolicy: RestartContainer
# Configure hostAliases for Velero deployment. Optional
# For more information, check: https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
hostAliases: []
@@ -128,7 +130,7 @@ dnsPolicy: ClusterFirst
# If the value is a string then it is evaluated as a template.
initContainers:
# - name: velero-plugin-for-aws
# image: velero/velero-plugin-for-aws:v1.12.1
# image: velero/velero-plugin-for-aws:v1.13.0
# imagePullPolicy: IfNotPresent
# volumeMounts:
# - mountPath: /target
@@ -247,6 +249,11 @@ metrics:
externalTrafficPolicy: ""
internalTrafficPolicy: ""
# the IP family policy for the metrics Service to be able to configure dual-stack; see [Configure dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services).
ipFamilyPolicy: ""
# a list of IP families for the metrics Service that should be supported, in the order in which they should be applied to ClusterIP. Can be "IPv4" and/or "IPv6".
ipFamilies: []
# Pod annotations for Prometheus
podAnnotations:
prometheus.io/scrape: "true"
@@ -291,26 +298,47 @@ metrics:
# namespace: ""
# Rules to be deployed
spec: []
# - alert: VeleroBackupPartialFailures
# - alert: VeleroBackupFailed
# annotations:
# message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} partialy failed backups.
# message: Velero backup {{ $labels.schedule }} has failed
# expr: |-
# velero_backup_partial_failure_total{schedule!=""} / velero_backup_attempt_total{schedule!=""} > 0.25
# velero_backup_last_status{schedule!=""} != 1
# for: 15m
# labels:
# severity: warning
# - alert: VeleroBackupFailures
# - alert: VeleroBackupFailing
# annotations:
# message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} failed backups.
# message: Velero backup {{ $labels.schedule }} has been failing for the last 12h
# expr: |-
# velero_backup_failure_total{schedule!=""} / velero_backup_attempt_total{schedule!=""} > 0.25
# velero_backup_last_status{schedule!=""} != 1
# for: 12h
# labels:
# severity: critical
# - alert: VeleroNoNewBackup
# annotations:
# message: Velero backup {{ $labels.schedule }} has not run successfully in the last 30h
# expr: |-
# (
# rate(velero_backup_last_successful_timestamp{schedule!=""}[15m]) <=bool 0
# or
# absent(velero_backup_last_successful_timestamp{schedule!=""})
# ) == 1
# for: 30h
# labels:
# severity: critical
# - alert: VeleroBackupPartialFailures
# annotations:
# message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} partially failed backups
# expr: |-
# rate(velero_backup_partial_failure_total{schedule!=""}[25m])
# / rate(velero_backup_attempt_total{schedule!=""}[25m]) > 0.5
# for: 15m
# labels:
# severity: warning
kubectl:
image:
repository: docker.io/bitnami/kubectl
repository: docker.io/bitnamilegacy/kubectl
# Digest value example: sha256:d238835e151cec91c6a811fe3a89a66d3231d9f64d09e5f3c49552672d271f38.
# If used, it will take precedence over the kubectl.image.tag.
# digest:
@@ -354,9 +382,9 @@ configuration:
# a backup storage location will be created with the name "default". Optional.
- name:
# provider is the name for the backup storage location provider.
provider:
provider: ""
# bucket is the name of the bucket to store backups in. Required.
bucket:
bucket: ""
# caCert defines a base64 encoded CA bundle to use when verifying TLS connections to the provider. Optional.
caCert:
# prefix is the directory under which all Velero data should be stored within the bucket. Optional.
@@ -398,10 +426,11 @@ configuration:
# Parameters for the VolumeSnapshotLocation(s). Configure multiple by adding other element(s) to the volumeSnapshotLocation slice.
# See https://velero.io/docs/v1.6/api-types/volumesnapshotlocation/
volumeSnapshotLocation:
# name is the name of the volume snapshot location where snapshots are being taken. Required.
# name is the name of the volume snapshot location where snapshots are being taken. If a name is not provided,
# a volume snapshot location will be created with the name "default". Optional.
- name:
# provider is the name for the volume snapshot provider.
provider:
provider: ""
credential:
# name of the secret used by this volumeSnapshotLocation.
name:
@@ -483,14 +512,48 @@ configuration:
# Resource requests/limits to specify for the repository-maintenance job. Optional.
# https://velero.io/docs/v1.14/repository-maintenance/#resource-limitation
repositoryMaintenanceJob:
requests:
# cpu: 500m
# memory: 512Mi
limits:
# cpu: 1000m
# memory: 1024Mi
# Number of latest maintenance jobs to keep for each repository
latestJobsCount: 3
# Per-repository resource settings ConfigMap
# This ConfigMap allows specifying different settings for different repositories
# See: https://velero.io/docs/main/repository-maintenance/
repositoryConfigData:
# Name of the ConfigMap to create. If not provided, will use "velero-repo-maintenance"
name: "velero-repo-maintenance"
# Global configuration applied to all repositories
# This configuration is used when no specific repository configuration is found
# global:
# podResources:
# cpuRequest: "100m"
# cpuLimit: "200m"
# memoryRequest: "100Mi"
# memoryLimit: "200Mi"
# keepLatestMaintenanceJobs: 1
# loadAffinity:
# - nodeSelector:
# matchExpressions:
# - key: "cloud.google.com/machine-family"
# operator: "In"
# values: ["e2"]
# - nodeSelector:
# matchExpressions:
# - key: "topology.kubernetes.io/zone"
# operator: "In"
# values: ["us-central1-a", "us-central1-b", "us-central1-c"]
# priorityClassName: "low-priority" # Note: priorityClassName is only supported in global configuration
global:
keepLatestMaintenanceJobs: 3
# Repository-specific configurations
# Repository keys are formed as: "{namespace}-{storageLocation}-{repositoryType}"
# For example: "default-default-kopia" or "prod-s3-backup-kopia"
# Note: priorityClassName is NOT supported in repository-specific configurations
# repositories:
# "kibishii-default-kopia":
# podResources:
# cpuRequest: "200m"
# cpuLimit: "400m"
# memoryRequest: "200Mi"
# memoryLimit: "400Mi"
# keepLatestMaintenanceJobs: 2
repositories: {}
# `velero server` default: velero
namespace:
# additional command-line arguments that will be passed to the `velero server`
@@ -601,6 +664,13 @@ nodeAgent:
# limits:
# cpu: 1000m
# memory: 1024Mi
# Container resize policy for the node-agent daemonset.
# See: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/
resizePolicy: []
# - resourceName: cpu
# restartPolicy: NotRequired
# - resourceName: memory
# restartPolicy: RestartContainer
# Tolerations to use for the node-agent daemonset. Optional.
tolerations: []
@@ -714,7 +784,7 @@ schedules: {}
# velero.io/plugin-config: ""
# velero.io/pod-volume-restore: RestoreItemAction
# data:
# image: velero/velero-restore-helper:v1.10.2
# image: velero/velero:v1.17.0
# cpuRequest: 200m
# memRequest: 128Mi
# cpuLimit: 200m

View File

@@ -6,14 +6,10 @@ velero:
volumeMounts:
- mountPath: /target
name: plugins
# deployNodeAgent: true
deployNodeAgent: true
configuration:
# defaultVolumesToFsBackup: true
backupStorageLocation: null
volumeSnapshotLocation: null
namespace: cozy-velero
features: EnableCSI
kubectl:
image:
repository: docker.io/alpine/k8s
tag: 1.33.4