Question
I am trying to configure my Kubernetes cluster to use a local NFS server for persistent volumes.
I set up the PersistentVolume as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hq-storage-u4
  namespace: my-ns
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/u4
    server: 10.30.136.79
    readOnly: false
The PV looks OK in kubectl
$ kubectl get pv
NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM             STORAGECLASS   REASON    AGE
hq-storage-u4   10Ti       RWX           Retain          Released   my-ns/pv-50g                               49m
I then try to create the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-50gb
  namespace: my-ns
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 5Gi
kubectl shows the PVC status as Pending:
$ kubectl get pvc
NAME       STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-50gb   Pending                                                     16m
When I try to add the volume to a deployment, I get the error:
[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected.]
How do I get the PVC to a working state?
Answer 1:
It turned out that I needed to put the IP (I also put the path) in quotes. After fixing that, the pvc goes to status Bound, and the pod can mount correctly.
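A sketch of what the corrected PV would look like, going by the fix described above (the fields are those from the question, with the server IP and path quoted; metadata.namespace is dropped here because PersistentVolumes are cluster-scoped and the field is ignored):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hq-storage-u4
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    # Quoting the path and server address was the reported fix.
    path: "/data/u4"
    server: "10.30.136.79"
    readOnly: false
```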
Answer 2:
I can't comment on your post, so I'll just attempt to answer this.
I've encountered two kinds of errors when PVCs don't work on my NFS cluster. Installing a PV usually succeeds, so the status message provided doesn't say much.
- The annotations and spec of the PV and the PVC are dissimilar. This doesn't look like the case here.
- The node of the pod that uses the NFS resource cannot mount the resource. Try
mount -t nfs 10.30.136.79:/data/u4 /mnt
on the node that is supposed to mount the NFS resource. This should succeed. If it fails, it could be:
  - The lack of mount permissions. Rectify /etc/exports on your NFS server.
  - A firewall blocking the NFS ports. Fix the firewall.
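If the manual mount fails with a permission error, the export on the server side has to admit the worker nodes. A minimal /etc/exports entry for the path above might look like this (the client subnet here is an assumption; adjust it to your network):

```
/data/u4  10.30.0.0/16(rw,sync,no_subtree_check)
```

After editing the file, apply the change with exportfs -ra on the NFS server.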
One more thing: a non-privileged user in the pod might have trouble writing to the NFS resource. The uid/gid of the user in the pod must match the permissions on the NFS resource.
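One way to line those up is a pod-level securityContext. A sketch, assuming the export is owned by uid/gid 1000 (hypothetical values; match them to the actual ownership of /data/u4 on the server):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
  namespace: my-ns
spec:
  securityContext:
    runAsUser: 1000     # must match the owner uid of the exported directory
    runAsGroup: 1000    # must match the owner gid of the exported directory
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: pvc-50gb
```

Note that Kubernetes does not chown NFS mounts for you, so matching the ids to the export's existing ownership is what makes writes succeed.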
Bonne chance!
Source: https://stackoverflow.com/questions/44556363/kubernetes-nfs-persistentvolumeclaim-has-status-pending