kubelet

CoreDNS in Pending state in Kubernetes cluster

拜拜、爱过 submitted on 2020-12-29 07:14:40
Question: I am trying to configure a two-node Kubernetes cluster. First I am configuring the master node on a CentOS VM. I initialized the cluster with 'kubeadm init --apiserver-advertise-address=172.16.100.6 --pod-network-cidr=10.244.0.0/16' and deployed the Flannel network to the cluster. But when I run 'kubectl get nodes', I get the following output:

[root@kubernetus ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
kubernetus   NotReady   master   57m   v1.12.0

Following is
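A minimal sketch of the bring-up sequence the question describes. The Flannel manifest URL is the one commonly used in that era of kubeadm guides and is an assumption here; check the Flannel project for the current location.

```shell
# Initialize the control plane. --pod-network-cidr must match Flannel's
# default network (10.244.0.0/16) or the addon will not come up cleanly.
kubeadm init --apiserver-advertise-address=172.16.100.6 \
             --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster (running as root here).
export KUBECONFIG=/etc/kubernetes/admin.conf

# Deploy Flannel (manifest URL is an assumption; verify against the
# Flannel documentation for your Kubernetes version).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# The node should move from NotReady to Ready once the CNI pods start;
# if it stays NotReady, inspect the kube-system pods.
kubectl get nodes
kubectl get pods -n kube-system
```

Until a CNI add-on is installed, kubeadm nodes report NotReady by design, which matches the output shown above.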

Kubelet fails to recreate pod sandbox after rebooting cluster servers (kubeadm)

不问归期 submitted on 2020-04-17 23:35:04
Question: When I reboot the master and worker nodes, the CoreDNS pod shows the error events below; it seems the kubelet cannot recreate the pod sandbox after the server restart.

Normal   SandboxChanged          12s  kubelet, izbp1dyjigsfwmw0dtl85gz  Pod sandbox changed, it will be killed and re-created.
Warning  FailedCreatePodSandBox  11s  kubelet, izbp1dyjigsfwmw0dtl85gz  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5e850ee3e8bf86688fec2badd9b0272127a0d775620a5783e7c30b4e0d412b01"
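One commonly suggested recovery path for sandbox-creation failures after a reboot, sketched below. It assumes a Flannel setup (the device names cni0 and flannel.1 are Flannel defaults) and a Docker runtime; it is destructive to stale pod-network state, not a definitive fix.

```shell
# First, read the full event log to see why the sandbox cannot be created.
kubectl -n kube-system describe pod -l k8s-app=kube-dns

# Confirm the CNI configuration survived the reboot.
ls /etc/cni/net.d/

# A frequent cause is stale network devices left from before the reboot.
# Stop the kubelet, remove the stale devices, and restart the stack.
systemctl stop kubelet
ip link delete cni0        # stale CNI bridge, if present
ip link delete flannel.1   # stale VXLAN device (Flannel default name)
systemctl restart docker
systemctl start kubelet
```

After the restart, the kubelet should recreate the sandboxes and CoreDNS should come back up.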

network plugin is not ready: cni config uninitialized

梦想与她 submitted on 2020-02-28 03:46:58
Question: KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

I don't know how to make the network plugin ready.

Answer 1: When you run kubectl describe node <node_name>, the Ready type in the Conditions table will contain this message if you have not initialized CNI. Proper initialization is achieved by installing a network add-on. I will point you to the two most used ones: Weave and Flannel. 1) Weave $
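The answer is truncated after "1) Weave", so as a sketch, these are the install commands those two add-ons documented at the time; the manifest URLs are assumptions and should be checked against each project's current docs.

```shell
# 1) Weave Net: the version of the running kubectl is interpolated
#    into the manifest URL, as the Weave docs prescribed.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# 2) Flannel (requires the cluster to have been initialized with
#    --pod-network-cidr=10.244.0.0/16):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Verify: the Ready condition should flip to True once the CNI pods run.
kubectl describe node <node_name> | grep -A 2 "Ready"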

Kubelet - x509: certificate is valid for 10.233.0.1 not for <IP>

北城余情 submitted on 2020-01-15 09:25:09
Question: I've installed my Kubernetes cluster (two nodes) with Kubespray. Now I have added a third node, and I get the following error from the kubelet on the new node:

Failed to list *v1.Service: Get https://94.130.25.248:6443/api/v1/services?limit=500&resourceVersion=0: x509: certificate is valid for 10.233.0.1, 94.130.25.247, 94.130.25.247, 10.233.0.1, 127.0.0.1, 94.130.25.247, 144.76.14.131, not 94.130.25.248

The IP 94.130.25.248 is the IP of the new node. I've found this post, which wrote about
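A sketch of how this SAN mismatch is usually diagnosed and fixed in a Kubespray cluster. The certificate path and inventory layout below are assumptions that vary by Kubespray version; `supplementary_addresses_in_ssl_keys` is the Kubespray variable for adding extra SANs.

```shell
# Inspect which SANs the current API server certificate carries
# (certificate path is an assumption; adjust to your install).
openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -text \
    | grep -A 1 "Subject Alternative Name"

# With Kubespray, add the new node's IP to the certificate SANs in the
# inventory group_vars, e.g.:
#   supplementary_addresses_in_ssl_keys: [94.130.25.248]
# then re-run the cluster playbook so the certs are regenerated:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml
```

Note that the new node's kubelet should normally talk to the API server through the address already in the SAN list rather than its own IP, so the kubeconfig on the new node is also worth checking.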

Kubernetes Kube-proxy failed to retrieve node info

两盒软妹~` submitted on 2020-01-15 08:47:08
Question: I am trying to understand why I'm seeing this output in my kube-proxy logs:

W0328 08:00:53.755379 1 server.go:468] Failed to retrieve node info: nodes "ip-172-31-55-175" not found
W0328 08:00:53.755505 1 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP

The cluster is working just fine; does this indicate an issue with the cluster configuration?

Answer 1: Can you please show the output of the command kubectl get node? Probably the registered name used when kubelet starts
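The answer cuts off mid-sentence, but the diagnosis it points at can be sketched as follows: compare the name the node registered under with the name kube-proxy looks up, which on AWS often differs (short hostname vs. full private DNS name).

```shell
# The names the API server knows the nodes by:
kubectl get nodes -o name   # e.g. node/ip-172-31-55-175.ec2.internal

# On the node itself, the name kube-proxy resolves by default:
hostname

# If the two differ, kube-proxy cannot find its own Node object and
# falls back to 127.0.0.1 as nodeIP. Aligning them is typically done
# with the override flag on kube-proxy (and kubelet):
#   --hostname-override=ip-172-31-55-175.ec2.internal
```

The warnings are harmless as long as no service relies on the node IP, which is why the cluster can appear to work fine despite them.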