How can you reuse dynamically provisioned PersistentVolumes with Helm on GKE?

Asked by 寵の児 on 2019-12-04 13:42:33

Question


I am trying to deploy a Helm chart which uses a PersistentVolumeClaim and a StorageClass to dynamically provision the required storage. This works as expected, but I can't find any configuration which allows a workflow like

helm delete xxx

# Make some changes and repackage chart

helm install --replace xxx

I don't want to run the release constantly, and I want to reuse the storage in future deployments.

Setting the storage class to reclaimPolicy: Retain keeps the disks, but Helm deletes the PVCs and orphans them. Annotating the PVCs so that Helm does not delete them fixes this problem, but then running install causes the error

Error: release xxx failed: persistentvolumeclaims "xxx-xxx-storage" already exists
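For reference, this is roughly what the two approaches above look like (a minimal sketch; the class name and disk type are illustrative, not from the original question):

# StorageClass that keeps the underlying GCE disk after the PVC is deleted
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain        # disks survive PVC deletion
parameters:
  type: pd-ssd

# Annotation that keeps the PVC out of "helm delete"
metadata:
  annotations:
    "helm.sh/resource-policy": keep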

I think I have misunderstood something fundamental to managing releases in Helm. Perhaps the volumes should not be created in the chart at all.


Answer 1:


A PersistentVolumeClaim just creates a binding between your actual PersistentVolume and your pod.

Using "helm.sh/resource-policy": keep annotation for PV is not the best idea, because of that remark in a documentation:

The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.

If you create the PV manually, Helm will still remove the PVC when you delete the release, but the PV itself is retained (it shows as "Released"; clearing its claimRef makes it "Available" again), and the next deployment can reuse it. You don't actually need to keep the PVC in the cluster to keep your data. But to make the claim always bind the same PV, you need to use labels and selectors.
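A sketch of clearing the claimRef so a "Released" PV becomes "Available" again (the PV name is illustrative, matching the example below):

kubectl patch pv myappvolume --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'

Once the claimRef is cleared, the PV reports "Available" and a matching claim can bind it.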

To keep and reuse volumes, you can:

  1. Create a PersistentVolume with a label, for example for_app=my-app, and set the "Retain" policy on that volume, like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myappvolume        # PVs are cluster-scoped, so no namespace field
  labels:
    for_app: my-app
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # a PV needs a volume source; on GKE that is typically an existing
  # GCE persistent disk (the disk name here is illustrative)
  gcePersistentDisk:
    pdName: my-app-disk
    fsType: ext4

  2. Modify your PersistentVolumeClaim configuration in Helm. Add a selector so that it only uses PersistentVolumes with the label for_app=my-app:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myappvolumeclaim
  namespace: my-app
spec:
  # an empty storageClassName stops the default StorageClass from
  # dynamically provisioning a fresh volume instead of binding this PV
  storageClassName: ""
  selector:
    matchLabels:
      for_app: my-app
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Now your application will use the same volume each time it starts.
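With those two pieces in place, the workflow from the question becomes possible (a sketch; the release name follows the question's placeholder):

helm delete xxx
# clear the claimRef as shown above so the PV returns to "Available"
# make your changes and repackage the chart
helm install --replace xxx
kubectl get pv myappvolume   # STATUS should show "Bound" again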

But please keep in mind that other apps in the same namespace may also need selectors, to prevent them from binding your PV.




Answer 2:


Actually, I'd suggest using StatefulSets and volumeClaimTemplates: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

The example there should speak for itself.
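Still, a minimal sketch of a StatefulSet with a volumeClaimTemplate (the names and image are illustrative; the PVCs it stamps out survive deletion of the StatefulSet and of the Helm release):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app      # headless Service the set is addressed through
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx             # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data                 # yields PVCs named data-my-app-0, ...
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi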



Source: https://stackoverflow.com/questions/49344501/how-can-you-reuse-dynamically-provisioned-persistentvolumes-with-helm-on-gke
