Kube-dns always in pending state

Posted by 孤街醉人 on 2021-02-10 15:12:50

Question


I have deployed kubernetes on a virt-manager vm following this link

https://kubernetes.io/docs/setup/independent/install-kubeadm/

When I join another VM to the cluster, I find that kube-dns is stuck in the Pending state.

root@ubuntu1:~# kubectl get pods --all-namespaces 
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu1                      1/1       Running   0          7m
kube-system   kube-apiserver-ubuntu1            1/1       Running   0          8m
kube-system   kube-controller-manager-ubuntu1   1/1       Running   0          8m
kube-system   kube-dns-86f4d74b45-br6ck         0/3       Pending   0          8m
kube-system   kube-proxy-sh9lg                  1/1       Running   0          8m
kube-system   kube-proxy-zwdt5                  1/1       Running   0          7m
kube-system   kube-scheduler-ubuntu1            1/1       Running   0          8m


root@ubuntu1:~# kubectl --namespace=kube-system describe pod kube-dns-86f4d74b45-br6ck
Name:           kube-dns-86f4d74b45-br6ck
Namespace:      kube-system
Node:           <none>
Labels:         k8s-app=kube-dns
                pod-template-hash=4290830601
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/kube-dns-86f4d74b45
Containers:
  kubedns:
    Image:       k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
    Ports:       10053/UDP, 10053/TCP, 10055/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro)
  dnsmasq:
    Image:       k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    Ports:       53/UDP, 53/TCP
    Host Ports:  0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro)
  sidecar:
    Image:      k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
    Port:       10054/TCP
    Host Port:  0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-4fjt4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-4fjt4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  6m (x7 over 7m)   default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3s (x19 over 6m)  default-scheduler  0/2 nodes are available: 2 node(s) were not ready.

Can anyone help me deconstruct this and find the actual issue?

Any help would be of great use.

Thanks in advance.


Answer 1:


In your case, the kubectl get pods --all-namespaces output does not show any pod network pods at all.

So you need to choose a network implementation and install a Pod Network add-on first; only then can kube-dns be fully deployed. For details, see the kubeadm documentation on kube-dns being stuck in the Pending state and on installing a pod network add-on; a sketch of the install step follows below.
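
As an illustration only (the manifest URL and supported Kubernetes versions depend on which add-on you pick, and the Flannel URL below is the one commonly used at the time, so check the add-on's current docs), installing Flannel typically looked like this:

# the cluster must have been initialized with a pod CIDR that matches the add-on,
# e.g. for Flannel:
#   kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# once the network add-on's pods are Running, the nodes should become Ready
# and kube-dns should get scheduled
kubectl get pods --all-namespaces
kubectl get nodes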




Answer 2:


In addition to what @justcompile wrote, you will need a minimum of 2 CPU cores in order to run all of the pods in the kube-system namespace without issues.

You need to verify how many resources you have on that box and compare them with the CPU reservations that each of the Pods makes.

For example, in the output you provided I can see that your DNS service tries to reserve 10% of a CPU core:

Requests:
  cpu:      100m

You can check each of the deployed pods and their CPU reservations using:

kubectl describe pods --namespace=kube-system
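
To compare those requests with what the node can actually offer, one rough check (using the node name ubuntu1 taken from the question's output) is to look at the Allocatable and "Allocated resources" sections of the node description:

kubectl describe node ubuntu1
kubectl describe nodes | grep -A 8 "Allocated resources"

If the sum of the CPU requests exceeds the node's allocatable CPU, newly created pods will stay Pending.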



Answer 3:


Firstly, if you run kubectl get nodes, does it show both/all nodes in a Ready state?

If they are: I faced this problem, and when inspecting kubectl get events I found that the pods were failing because they required a minimum of 2 CPUs to run.

As I was initially running this on an old MacBook Pro via VirtualBox, I had to give up and use AWS (other cloud platforms are of course available) in order to get multiple CPUs per node.
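
A quick way to check both possibilities (too few CPUs, or a missing pod network) is to inspect the node itself; the node name below is again taken from the question's output:

kubectl get nodes
kubectl describe node ubuntu1
kubectl get events --namespace=kube-system

In the node description, the Conditions section usually explains why a node is NotReady (for instance, an uninitialized CNI/pod network), and Capacity/Allocatable shows how many CPU cores the kubelet sees.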



Source: https://stackoverflow.com/questions/49555137/kube-dns-always-in-pending-state
