kubernetes dashboard showing http: proxy error: dial tcp [::1]:8080: connect: connection refused

Submitted by 三世轮回 on 2021-01-28 18:28:21

Question


I installed kubeadm to deploy a multi-node Kubernetes cluster and added two nodes, both of which are Ready. I am able to run my app using a NodePort service, but I am facing an issue while trying to access the dashboard. I followed the steps to install the dashboard from this link:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
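
To confirm the dashboard pod actually came up, regardless of which namespace that version of the manifest deploys into, a quick check is:

kubectl get pods --all-namespaces | grep dashboard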

dashboard-admin.yaml:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

kubectl create -f dashboard-admin.yaml

nohup kubectl proxy --address="172.20.22.101" -p 443 --accept-hosts='^*$' &

It's running fine and saving its output in nohup.out.

When I try to access the site using the URL 172.20.22.101:443/api/v1/namespaces/kube-system/services/…, it shows connection refused. Looking at the output in nohup.out, I see the error below:

I1203 12:28:05.880828 15591 log.go:172] http: proxy error: dial tcp [::1]:8080: connect: connection refused


Answer 1:


You are not running it with root or sudo permissions.

I encountered this issue myself, and after running the command as root I was able to access the dashboard with no errors.
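
If you would rather not run kubectl as root, a common fix on kubeadm-based clusters (assuming the default admin kubeconfig at /etc/kubernetes/admin.conf) is to copy it into your own user's home directory, so that kubectl stops falling back to the unauthenticated localhost:8080 endpoint:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config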




Answer 2:


log.go:172] http: proxy error: dial tcp [::1]:8080: connect: connection refused

In case anyone comes across the above issue: this error occurs when you try to hit the Kubernetes API without the proper permissions.

Note: it has nothing to do with RBAC.

To solve this issue, I took the steps below:

  1. Check your access. Execute as root.
  2. If you are using kubectl proxy to connect to the Kubernetes API, make sure the kubeconfig file is properly configured, or try kubectl proxy --kubeconfig=/path/to/dashboard-user.kubeconfig (see the example below).
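
For reference, a minimal way to check which configuration kubectl is actually picking up, and to point the proxy explicitly at a kubeconfig (the admin.conf path below is the kubeadm default and only an assumption about your setup), is:

kubectl config view --minify
kubectl proxy --kubeconfig=/etc/kubernetes/admin.conf --address=172.20.22.101 --accept-hosts='^*$'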



Answer 3:


I had a similar problem recently, and the root cause was that in my cluster deployment (3 nodes), the kubernetes-dashboard pod was running on a worker (non-master) node. The issue is that kubectl proxy only serves requests locally (for security reasons), so the dashboard console could not be opened on the master node, nor on node 3!

Browser error on the master node (kubectl proxy & executed on this node):

"http: proxy error: dial tcp 10.32.0.2:8001: connect: connection refused"

Browser error on the worker node (kubectl proxy & executed on this node):

 "http: proxy error: dial tcp [::1]:8080: connect: connection refused"

Solution:

The status of the cluster pods showed that the dashboard pod kubernetes-dashboard-7b544877d5-lj4xq was on node 3:

namespace: kubernetes-dashboard
pod:       kubernetes-dashboard-7b544877d5-lj4xq
node:      pb-kn-node03

[root@PB-KN-Node01 ~]# kubectl get pods --all-namespaces -o wide|more
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-ph7cc                     1/1     Running   1          3d17h   10.32.0.3      pb-kn-node01   <none>           <none>
kube-system            coredns-66bff467f8-x22cv                     1/1     Running   1          3d17h   10.32.0.2      pb-kn-node01   <none>           <none>
kube-system            etcd-pb-kn-node01                            1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-apiserver-pb-kn-node01                  1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-controller-manager-pb-kn-node01         1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-4ngd2                             1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-7qvbj                             1/1     Running   0          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kube-system            kube-proxy-fgrcp                             1/1     Running   0          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            kube-scheduler-pb-kn-node01                  1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-fm2kd                              2/2     Running   5          3d12h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-l6rmw                              2/2     Running   1          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            weave-net-r56xk                              2/2     Running   1          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-v2gqp   1/1     Running   0          2d22h   10.40.0.1      pb-kn-node02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-lj4xq        1/1     Running   15         2d22h   10.32.0.2      pb-kn-node03   <none>           <none>
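
For reference, the same information can be pulled with a narrower query that looks only at the dashboard namespace shown above:

kubectl -n kubernetes-dashboard get pods -o wide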

So all of the active (non-DaemonSet) pods, including the dashboard, were reallocated from node 3 to the master node after draining the node:

[root@PB-KN-Node01 ~]# kubectl drain --delete-local-data --ignore-daemonsets pb-kn-node03
node/pb-kn-node03 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-fgrcp, kube-system/weave-net-l6rmw
node/pb-kn-node03 drained

After 2 minutes ...

[root@PB-KN-Node01 ~]# kubectl get pods --all-namespaces -o wide|more
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-ph7cc                     1/1     Running   1          3d17h   10.32.0.3      pb-kn-node01   <none>           <none>
kube-system            coredns-66bff467f8-x22cv                     1/1     Running   1          3d17h   10.32.0.2      pb-kn-node01   <none>           <none>
kube-system            etcd-pb-kn-node01                            1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-apiserver-pb-kn-node01                  1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-controller-manager-pb-kn-node01         1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-4ngd2                             1/1     Running   2          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            kube-proxy-7qvbj                             1/1     Running   0          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kube-system            kube-proxy-fgrcp                             1/1     Running   0          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            kube-scheduler-pb-kn-node01                  1/1     Running   3          3d17h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-fm2kd                              2/2     Running   5          3d12h   10.13.40.201   pb-kn-node01   <none>           <none>
kube-system            weave-net-l6rmw                              2/2     Running   1          3d12h   10.13.40.203   pb-kn-node03   <none>           <none>
kube-system            weave-net-r56xk                              2/2     Running   1          3d12h   10.13.40.202   pb-kn-node02   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-v2gqp   1/1     Running   0          2d22h   10.40.0.1      pb-kn-node02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-8ln2n        1/1     Running   0          89s     10.32.0.4      pb-kn-node01   <none>           <none>

And the problem was solved: the Kubernetes dashboard was now available on the master node.
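
Note that draining leaves pb-kn-node03 cordoned; once the dashboard pod has been rescheduled, the node can be made schedulable again with:

kubectl uncordon pb-kn-node03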



Source: https://stackoverflow.com/questions/53590248/kubernetes-dashboard-showing-http-proxy-error-dial-tcp-18080-connect-co
