mirror of
https://github.com/outbackdingo/qdrant-helm.git
synced 2026-01-27 10:20:18 +00:00
Add docs for updating immutable fields on statefulsets (#179)
45 README.md
@@ -14,6 +14,8 @@ helm repo update
```
helm upgrade -i your-qdrant-installation-name qdrant/qdrant
```

For more in-depth usage documentation, see [the helm chart's README](charts/qdrant/README.md).

## Upgrading

This helm chart installs the latest version of Qdrant by default. When a new version of Qdrant is available, upgrade the helm chart with the following commands:
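Those commands typically look like the following; a sketch assuming the release name used above and that you pin the version through the chart's `image.tag` value:

```shell
helm repo update
helm upgrade -i your-qdrant-installation-name qdrant/qdrant --set image.tag=v1.9.0
```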
@@ -32,49 +34,6 @@ image:
```
tag: v1.9.0
```
## Restoring from Snapshots

This helm chart allows you to restore a snapshot into your Qdrant cluster either from an internal or external PersistentVolumeClaim.

### Restoring from the built-in PVC

If you have set `snapshotPersistence.enabled: true` (recommended for production), this helm chart will create a separate PersistentVolume for snapshots, and any snapshots you create will be stored in that PersistentVolume.
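In values terms, enabling that persistence is a small fragment (other `snapshotPersistence` options, such as volume size or storage class, are chart-specific and left at their defaults here):

```yaml
snapshotPersistence:
  enabled: true
```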
To restore from one of these snapshots, set the following values:

```yaml
snapshotRestoration:
  enabled: true
  # Set blank to indicate we are not using an external PVC
  pvcName: ""
  snapshots:
    - /qdrant/snapshots/<collection_name>/<filename>/:<collection_name>
```

Then run `helm upgrade`. This will restart your cluster and restore the specified collection from the snapshot. Qdrant will refuse to overwrite an existing collection, so ensure the collection is deleted before restoring.
After the snapshot is restored, remove the above values and run `helm upgrade` again to trigger another rolling restart. Otherwise, the snapshot restore will be attempted again if your cluster ever restarts.

### Restoring from an external PVC

If you wish to restore from an externally-created snapshot, using the API is recommended: https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/recover_from_uploaded_snapshot

If the file is too large, you can separately create a PersistentVolumeClaim, store your data in there, and refer to this separate PersistentVolumeClaim in this helm chart.
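Such a claim might look like the following sketch; the name `qdrant-snapshots-import`, the size, and the storage class are hypothetical and must be adjusted to your cluster:

```yaml
# Hypothetical claim to hold externally-created snapshots;
# must be in the same namespace as your Qdrant cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qdrant-snapshots-import
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

One way to copy the snapshot file into the claim is to mount it in a temporary pod and use `kubectl cp`; the claim name then goes into `snapshotRestoration.pvcName`.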
Once you have created this PersistentVolumeClaim (must be in the same namespace as your Qdrant cluster), set the following values:
```yaml
snapshotRestoration:
  enabled: true
  pvcName: "<the name of your PVC>"
  snapshots:
    - /qdrant/snapshots/<collection_name>/<filename>/:<collection_name>
```

Then run `helm upgrade`. This will restart your cluster and restore the specified collection from the snapshot. Qdrant will refuse to overwrite an existing collection, so ensure the collection is deleted before restoring.

After the snapshot is restored, remove the above values and run `helm upgrade` again to trigger another rolling restart. Otherwise, the snapshot restore will be attempted again if your cluster ever restarts.

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md).
@@ -58,16 +58,84 @@ Increase the number of replicas to the desired number of nodes and set `config.c

Depending on your environment or cloud provider you might need to change the service in the `values.yaml` as well.
For example, on AWS EKS you would need to change the `cluster.type` to `NodePort`.

## Updating StatefulSets

This Helm chart uses a Kubernetes [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) to manage your Qdrant cluster. StatefulSets have many fields that are immutable, meaning that you cannot change these fields without deleting and recreating the StatefulSet. If you try to change these fields, you will get an error like this:

```
Error: UPGRADE FAILED: cannot patch "qdrant" with kind StatefulSet: StatefulSet.apps "qdrant" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
```

If you need to change any immutable field, the process is described below, using the most common example of expanding a PVC volume.

1. Delete the StatefulSet while leaving the Pods running:

   ```
   kubectl delete statefulset --cascade=orphan qdrant
   ```

2. Manually edit all PersistentVolumeClaims to increase their sizes:

   ```
   # For each PersistentVolumeClaim:
   kubectl edit pvc qdrant-storage-qdrant-0
   ```

3. Update your Helm values to match the new PVC size.

4. Reinstall the Helm chart using your updated values:

   ```
   helm upgrade --install qdrant qdrant/qdrant -f my-values.yaml
   ```

Some storage providers allow resizing volumes in-place, but most require a pod restart before the new size will take effect:

```
kubectl rollout restart statefulset qdrant
```
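To confirm the resize took effect after the restart, a quick check against the PVC name used in step 2:

```shell
kubectl get pvc qdrant-storage-qdrant-0
```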
## Metrics endpoints

Metrics are available through the REST API (default port 6333) at `/metrics`.
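A quick way to view them locally; this sketch assumes a service named `qdrant` in the current namespace:

```shell
# Forward the REST port to localhost, then scrape the endpoint
kubectl port-forward svc/qdrant 6333:6333 &
curl http://localhost:6333/metrics
```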
@@ -199,6 +199,9 @@ serviceAccount:

priorityClassName: ""

# We discourage changing this setting. Using the "OrderedReady" policy in a
# multi-node cluster will cause a deadlock where nodes refuse to become
# "Ready" until all nodes are running.
podManagementPolicy: Parallel

podDisruptionBudget: