How does Kubernetes track which cloud disk is attached to which Pod in a StatefulSet?

Submitted by 浪尽此生 on 2019-12-05 21:30:50

When you scale the StatefulSet to 0 replicas, the Pods are destroyed, but the PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) are kept. The association with the GCE disk is recorded inside the PersistentVolume object. When you scale the StatefulSet back up, each Pod is bound to its stably named PVC again and therefore gets the same disk back from GCE.
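As an illustration, here is a minimal sketch of what such a PersistentVolume object might look like; the PV, PVC, and disk names (`pvc-data-web-0`, `data-web-0`, `gke-cluster-disk-0`) are hypothetical examples, not values from the original question:

```yaml
# Hypothetical PV as created by the GCE PD provisioner (names are examples).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-data-web-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # The association with the GCE disk lives here:
  gcePersistentDisk:
    pdName: gke-cluster-disk-0
    fsType: ext4
  # claimRef binds this PV to the StatefulSet's PVC, so the same Pod
  # (web-0) gets the same disk after scaling back up:
  claimRef:
    namespace: default
    name: data-web-0
```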

In order to change the PersistentVolume ↔ GCE disk association after restoring from a snapshot, you need to edit the PV object.
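For example, assuming a hypothetical PV named `pvc-data-web-0` and a disk restored from a snapshot under the (also hypothetical) name `gke-cluster-disk-0-restored`, the edit amounts to changing this part of the PV spec (e.g. via `kubectl edit pv pvc-data-web-0`):

```yaml
spec:
  gcePersistentDisk:
    # Point the PV at the disk that was restored from the snapshot:
    pdName: gke-cluster-disk-0-restored
    fsType: ext4
```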

Kubernetes 1.12 started addressing this issue in a more generalized way with snapshot/restore functionality for Kubernetes and CSI (Container Storage Interface), introduced as an alpha feature.
This provides a standardized API design (CRDs) and adds PV snapshot/restore support for CSI volume drivers.
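As a sketch of that alpha API: a snapshot is taken by creating a `VolumeSnapshot` object, and a restore is done by creating a new PVC whose `dataSource` references it. The resource kinds and `snapshot.storage.k8s.io/v1alpha1` group come from the CSI snapshot CRDs; the PVC, snapshot, and class names below are hypothetical:

```yaml
# Take a snapshot of an existing PVC (alpha API, Kubernetes 1.12).
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-web-0-snap
spec:
  snapshotClassName: csi-snapclass   # hypothetical VolumeSnapshotClass
  source:
    kind: PersistentVolumeClaim
    name: data-web-0
---
# Restore: a new PVC provisioned from the snapshot instead of empty storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-web-0-restored
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSource:
    kind: VolumeSnapshot
    name: data-web-0-snap
    apiGroup: snapshot.storage.k8s.io
```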

This is no longer specific to GKE.

See the feature request "Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller)" and its associated CSI snapshot design.

The StatefulSet aspect was not yet fully addressed at the beta stage, but is planned for a later phase; from the design doc:

The following are non-goals for the current phase, but will be considered at a later phase.

Goal 5: Provide higher-level management, such as backing up and restoring a pod and statefulSet, and creating a consistent group of snapshots.

See PR for the documentation: "Volume Snapshot and Restore Volume from Snapshot Support"
