Volume is already attached by pod

Posted by 那年仲夏 on 2021-02-20 04:50:31

Question


I installed Kubernetes on Ubuntu on bare metal, with 1 master and 3 workers, and then deployed Rook; everything worked fine. But when I tried to deploy WordPress on it, the pod got stuck in ContainerCreating, so I deleted WordPress, and now I get this error:

Volume is already attached by pod default/wordpress-mysql-b78774f44-gvr58. Status Running

# kubectl describe pods wordpress-mysql-b78774f44-bjc2c

Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    3m21s                  default-scheduler  Successfully assigned default/wordpress-mysql-b78774f44-bjc2c to worker2
  Warning  FailedMount  2m57s (x6 over 3m16s)  kubelet, worker2   MountVolume.SetUp failed for volume "pvc-dcba7817-553b-11e9-a229-52540076d16c" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume pvc-dcba7817-553b-11e9-a229-52540076d16c for pod default/wordpress-mysql-b78774f44-bjc2c. Volume is already attached by pod default/wordpress-mysql-b78774f44-gvr58. Status Running
  Normal   Pulling      2m26s                  kubelet, worker2   Pulling image "mysql:5.6"
  Normal   Pulled       110s                   kubelet, worker2   Successfully pulled image "mysql:5.6"
  Normal   Created      106s                   kubelet, worker2   Created container mysql
  Normal   Started      101s                   kubelet, worker2   Started container mysql

For more information:
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-dcba7817-553b-11e9-a229-52540076d16c   20Gi       RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            13m
pvc-e9797517-553b-11e9-a229-52540076d16c   20Gi       RWO            Delete           Bound    default/wp-pv-claim      rook-ceph-block            13m
# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-dcba7817-553b-11e9-a229-52540076d16c   20Gi       RWO            rook-ceph-block   15m
wp-pv-claim      Bound    pvc-e9797517-553b-11e9-a229-52540076d16c   20Gi       RWO            rook-ceph-block   14m

# kubectl get pods --all-namespaces
NAMESPACE          NAME                                  READY   STATUS      RESTARTS   AGE
default            wordpress-595685cc49-sdbfk            1/1     Running     6          9m58s
default            wordpress-mysql-b78774f44-bjc2c       1/1     Running     0          8m14s
kube-system        coredns-fb8b8dccf-plnt4               1/1     Running     0          46m
kube-system        coredns-fb8b8dccf-xrkql               1/1     Running     0          47m
kube-system        etcd-master                           1/1     Running     0          46m
kube-system        kube-apiserver-master                 1/1     Running     0          46m
kube-system        kube-controller-manager-master        1/1     Running     1          46m
kube-system        kube-flannel-ds-amd64-45bsf           1/1     Running     0          40m
kube-system        kube-flannel-ds-amd64-5nxfz           1/1     Running     0          40m
kube-system        kube-flannel-ds-amd64-pnln9           1/1     Running     0          40m
kube-system        kube-flannel-ds-amd64-sg4pv           1/1     Running     0          40m
kube-system        kube-proxy-2xsrn                      1/1     Running     0          47m
kube-system        kube-proxy-mll8b                      1/1     Running     0          42m
kube-system        kube-proxy-mv5dw                      1/1     Running     0          42m
kube-system        kube-proxy-v2jww                      1/1     Running     0          42m
kube-system        kube-scheduler-master                 1/1     Running     0          46m
rook-ceph-system   rook-ceph-agent-8pbtv                 1/1     Running     0          26m
rook-ceph-system   rook-ceph-agent-hsn27                 1/1     Running     0          26m
rook-ceph-system   rook-ceph-agent-qjqqx                 1/1     Running     0          26m
rook-ceph-system   rook-ceph-operator-d97564799-9szvr    1/1     Running     0          27m
rook-ceph-system   rook-discover-26g84                   1/1     Running     0          26m
rook-ceph-system   rook-discover-hf7lc                   1/1     Running     0          26m
rook-ceph-system   rook-discover-jc72g                   1/1     Running     0          26m
rook-ceph          rook-ceph-mgr-a-68cb58b456-9rrj7      1/1     Running     0          21m
rook-ceph          rook-ceph-mon-a-6469b4c68f-cq6mj      1/1     Running     0          23m
rook-ceph          rook-ceph-mon-b-d59cfd758-2d2zt       1/1     Running     0          22m
rook-ceph          rook-ceph-mon-c-79664b789-wl4t4       1/1     Running     0          21m
rook-ceph          rook-ceph-osd-0-8778dbbc-d84mh        1/1     Running     0          19m
rook-ceph          rook-ceph-osd-1-84974b86f6-z5c6c      1/1     Running     0          19m
rook-ceph          rook-ceph-osd-2-84f9b78587-czx6d      1/1     Running     0          19m
rook-ceph          rook-ceph-osd-prepare-worker1-x4rqc   0/2     Completed   0          20m
rook-ceph          rook-ceph-osd-prepare-worker2-29jpg   0/2     Completed   0          20m
rook-ceph          rook-ceph-osd-prepare-worker3-rkp52   0/2     Completed   0          20m

Answer 1:


You are using a standard storage class for your PVC, and its access mode is ReadWriteOnce. This does not mean you can only connect the PVC to one pod, but only to one node.

ReadWriteOnce – the volume can be mounted as read-write by a single node

ReadWriteMany – the volume can be mounted as read-write by many nodes
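
The access mode is declared on the PVC itself. As a minimal sketch (the claim name and storageClassName below are placeholders, not from the question; you would need a storage class that actually supports ReadWriteMany, such as an NFS-backed one):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pv-claim          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany              # mountable read-write by many nodes
  storageClassName: nfs-client   # placeholder; must be a class supporting ReadWriteMany
  resources:
    requests:
      storage: 20Gi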

Here it seems you have two pods trying to mount this volume. This will be flaky unless you do one of the following:

  • Schedule both pods on the same node (see the nodeSelector sketch after this list)
  • Use another storage class, such as an NFS-backed (filesystem) one, whose access mode can be ReadWriteMany (as sketched above)
  • Downscale to 1 pod, so you don't have to share the volume
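
For the first option, one way to force both pods onto the same node is a nodeSelector in the pod template. This is only an illustrative fragment; the node name worker2 is taken from the events above, and kubernetes.io/hostname is the standard built-in node label:

# Fragment of a Deployment's pod template (sketch)
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker2   # pin pods to the node shown in the events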

Right now you have two pods trying to mount the same volume: default/wordpress-mysql-b78774f44-gvr58 and default/wordpress-mysql-b78774f44-bjc2c.

You can also simply downscale to one pod, so you don't have to worry about any of the above:

kubectl scale deploy wordpress-mysql --replicas=1
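
One extra point, not from the original answer but a common cause of this exact symptom: a Deployment's default RollingUpdate strategy briefly runs the old and new pods side by side, which with a ReadWriteOnce volume reproduces this error on every rollout. The upstream Kubernetes WordPress/MySQL example manifests avoid this with the Recreate strategy:

# Fragment of the Deployment spec (sketch)
spec:
  strategy:
    type: Recreate   # stop the old pod before starting its replacement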


Source: https://stackoverflow.com/questions/55474193/volume-is-already-attached-by-pod
