I'm running a MySQL deployment on Kubernetes, but it seems like the space I allocated was not enough: initially I added a persistent volume of 50GB and now I'd like to increase its size.
There is some support for this in 1.8 and above, for some volume types, including gcePersistentDisk and awsElasticBlockStore, if certain experimental features are enabled on the cluster.
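For the volume types that do support it, the whole thing boils down to allowing expansion on the StorageClass and asking the PVC for more space. A rough sketch, assuming the experimental feature gate is enabled and using placeholder names (a `default` StorageClass, a `mysql-data` claim, and 100Gi as the new size):

```sh
# Allow expansion for volumes provisioned by this StorageClass
# (requires the experimental ExpandPersistentVolumes feature gate).
kubectl patch storageclass default \
  -p '{"allowVolumeExpansion": true}'

# Ask for a bigger size on the claim; the controller resizes the backing disk.
kubectl patch pvc mysql-data \
  -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'

# Until online resize arrives (see below), the pod usually has to be
# restarted before the filesystem itself is grown to the new size.
```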
For other volume types, it must be done manually for now. In addition, support for doing this automatically while pods are online (nice!) is coming in a future version (currently slated for 1.11).
For now, these are the steps I followed to do this manually with an AzureDisk volume type (for managed disks), which currently does not support persistent disk resize (but support is coming for this too):
1. Scale down or delete the pods that are using the volume so the disk is detached from its node; the PV and PVC themselves stay Bound. Take special care for stateful sets that are managed by an operator, such as Prometheus -- the operator may need to be disabled temporarily. It may also be possible to use Scale to do one pod at a time. This may take a few minutes, be patient.
2. Resize the underlying Azure managed disk (it must be unattached while it is resized).
3. Attach the resized disk to a VM and run e2fsck and resize2fs to resize the filesystem on the PV (assuming an ext3/4 FS). Unmount the disks.
4. Make sure the PV's persistentVolumeReclaimPolicy is Retain so the underlying disk is not deleted along with the claim, keep a copy of the PVC manifest, then delete the PVC; the PV becomes Released.
5. Edit the PV so it becomes Available again:
   - update spec.capacity.storage to the new size,
   - remove the spec.claimRef uid and resourceVersion fields, and
   - remove status.phase.
6. Re-create the PVC from the saved manifest, but:
   - remove the metadata.resourceVersion field,
   - remove the pv.kubernetes.io/bind-completed and pv.kubernetes.io/bound-by-controller annotations,
   - set the spec.resources.requests.storage field to the updated PV size, and
   - remove status.
7. Re-create the pods (scale the stateful set or deployment back up, and re-enable the operator if you disabled one). The PVC may initially show a Pending state, but both the PV and PVC should transition relatively quickly to Bound.
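For reference, the kubectl side of the steps above looks roughly like this; the stateful set, PVC, and PV names are placeholders, and the Azure-level resize and filesystem work happen outside of these commands:

```sh
# 1. Detach the disk from its node by scaling the workload down.
kubectl scale statefulset mysql --replicas=0

# 2-3. Resize the managed disk in Azure, then attach it to a VM and run
#      e2fsck/resize2fs there (not shown; this happens outside Kubernetes).

# 4. Keep a copy of the claim, make sure the PV will be retained, then delete the claim.
kubectl get pvc mysql-data -o yaml > mysql-data-pvc.yaml
kubectl patch pv pvc-0123abcd -p '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
kubectl delete pvc mysql-data          # the PV becomes Released

# 5. Edit the PV as described above (spec.capacity.storage, claimRef
#    uid/resourceVersion, status.phase) so it becomes Available again.
kubectl edit pv pvc-0123abcd

# 6. Edit the saved manifest as described above, then re-create the claim.
kubectl apply -f mysql-data-pvc.yaml

# 7. Bring the pods back; the PV and PVC should settle back to Bound.
kubectl scale statefulset mysql --replicas=1
```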