When I provision a Kubernetes cluster using kubeadm, my worker nodes show up with their ROLES set to <none>. It's a known issue in Kubernetes and a PR to fix it is currently in progress. In the meantime, I would like to set the role myself, which can be done with a label.
Before label:
general@master-node:~$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
master-node   Ready    master   23m   v1.18.2
slave-node    Ready    <none>   19m   v1.18.2
Run kubectl label nodes <node-name> kubernetes.io/role=<role>. In my case the node is slave-node, e.g.:
kubectl label nodes slave-node kubernetes.io/role=worker
After label:
general@master-node:~$ kubectl label nodes slave-node kubernetes.io/role=worker
node/slave-node labeled
general@master-node:~$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
master-node   Ready    master   24m   v1.18.2
slave-node    Ready    worker   21m   v1.18.2
You can also change an existing label later; just add --overwrite:
kubectl label --overwrite nodes <node-name> kubernetes.io/role=<role>, e.g.:
kubectl label --overwrite nodes slave-node kubernetes.io/role=worker1
After overwriting the label:
general@master-node:~$ kubectl label --overwrite nodes slave-node kubernetes.io/role=worker1
node/slave-node labeled
general@master-node:~$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
master-node   Ready    master   36m   v1.18.2
slave-node    Ready    worker1  32m   v1.18.2
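Beyond making kubectl get nodes look tidy, the label is an ordinary node label, so it can also be used for scheduling. A minimal sketch of a pod pinned to nodes carrying the label from the first example (the pod name nginx-on-worker and the nginx image are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-worker        # placeholder name
spec:
  nodeSelector:
    kubernetes.io/role: worker # only schedules on nodes with this label
  containers:
  - name: nginx
    image: nginx:1.19
```

If you later overwrite the label (e.g. to worker1), remember to update any nodeSelector that references the old value, or the pod will stay Pending.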