Kubernetes

ClusterIP: None and failing pods

Submitted by 懵懂的女人 on 2021-02-10 16:48:54

Question: I have NGINX in front of several pods, exposed through a headless service (ClusterIP: None). NGINX forwards traffic to these pods like this: upstream api { server my-api:1066; }. Will this configuration distribute traffic evenly among all pods behind the my-api hostname? Will failing pods be removed from the hostname resolution?

Answer 1: The default traffic distribution for Kubernetes Services is random, based on the default proxy mode, iptables. (This is likely your case.) In very old Kubernetes versions (<1.1
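For a headless service (ClusterIP: None), DNS for my-api resolves directly to the IPs of all ready pods, and pods whose readiness probe fails are removed from the endpoints. A minimal sketch of that setup, with the service name and port taken from the question and everything else (labels, image, health endpoint) assumed for illustration:

```yaml
# Headless service: DNS for "my-api" returns the IPs of all READY pods.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  clusterIP: None            # headless: no kube-proxy load balancing
  selector:
    app: my-api
  ports:
    - port: 1066
      targetPort: 1066
---
# Pods need a readiness probe so failing pods drop out of DNS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api
          image: my-api:latest      # hypothetical image
          ports:
            - containerPort: 1066
          readinessProbe:
            httpGet:
              path: /healthz        # assumed health endpoint
              port: 1066
```

Note that NGINX resolves an upstream hostname only at startup unless a `resolver` directive and a variable-based `proxy_pass` are used, so pods removed from DNS may linger in NGINX's cached upstream list.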

Kubernetes metrics server not running

Submitted by 旧巷老猫 on 2021-02-10 15:51:14

Question: I have installed the metrics server on my local k8s cluster on VirtualBox using https://github.com/kubernetes-sigs/metrics-server#installation, but the metrics-server pod is in CrashLoopBackOff:

metrics-server-844d9574cf-bxdk7  0/1  CrashLoopBackOff  28  12h  10.46.0.1  kubenode02  <none>  <none>

Events from the pod describe:

Events:
  Type    Reason     Age        From  Message
  ----    ------     ----       ----  -------
  Normal  Scheduled  <unknown>        Successfully assigned kube-system/metrics-server-844d9574cf-bxdk7 to kubenode02
  Normal  Created    12h (x3 over
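On local clusters like this, a common cause of a crash-looping metrics-server is that the kubelet serving certificates are not signed by the cluster CA, or that nodes are not resolvable by hostname. A sketch of the flags often added to the metrics-server Deployment's container args as a workaround (the image version is an assumption; verify against your installed manifest):

```yaml
# Excerpt of the metrics-server Deployment container spec.
spec:
  containers:
    - name: metrics-server
      image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7  # version assumed
      args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls          # skip kubelet certificate verification (lab use only)
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS
```

After editing the Deployment, check the container logs (`kubectl logs -n kube-system <pod>`) rather than only the events; the crash reason is usually printed there.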

Expose Kubernetes cluster to Internet

Submitted by 梦想的初衷 on 2021-02-10 15:13:14

Question: I have created a Kubernetes cluster on my virtual machine and have been trying to expose it to the Internet with my own domain (e.g. www.mydomain.xyz). I have created an ingress resource as below, and I have also modified the kubelet configuration to use my domain name. All my pods and services are created under this domain name (e.g. default.svc.mydomain.xyz).

root@master-1:~# kubectl get ingress
NAME           CLASS    HOSTS             ADDRESS       PORTS   AGE
test-ingress   <none>   www.mydomain.xyz  192.168.5.11  80      5d20h

root@master
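An Ingress resource matching the kubectl output above might look like the sketch below (the backend service name and port are assumptions; the question does not show them). Note that an Ingress alone does not expose anything: an ingress controller must be running, and the public DNS record for the domain must point at an address reachable from the Internet — 192.168.5.11 is a private address, so port forwarding or an external load balancer is also needed.

```yaml
# Minimal Ingress for the host shown above (backend is hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - host: www.mydomain.xyz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # hypothetical backend service
                port:
                  number: 80
```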

Kube-dns always in pending state

Submitted by 孤街醉人 on 2021-02-10 15:12:50

Question: I have deployed Kubernetes on a virt-manager VM following https://kubernetes.io/docs/setup/independent/install-kubeadm/. When I join another VM to the cluster, I find that kube-dns is in the Pending state.

root@ubuntu1:~# kubectl get pods --all-namespaces
NAMESPACE    NAME                              READY  STATUS   RESTARTS  AGE
kube-system  etcd-ubuntu1                      1/1    Running  0         7m
kube-system  kube-apiserver-ubuntu1            1/1    Running  0         8m
kube-system  kube-controller-manager-ubuntu1   1/1    Running  0         8m
kube-system  kube-dns-86f4d74b45
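After kubeadm init, kube-dns typically stays Pending until a pod network (CNI) add-on is installed, because the nodes remain NotReady and the DNS pods cannot be scheduled. A sketch of how to check and fix this (the Weave Net URL is the one the kubeadm docs used in that era; verify against the current documentation for your Kubernetes version):

```shell
# Nodes are usually NotReady (and kube-dns Pending) until a CNI plugin exists.
kubectl get nodes
kubectl describe pod -n kube-system -l k8s-app=kube-dns   # look for "network not ready"

# Install a pod network add-on, e.g. Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```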

Unable to setup couchbase operator 1.2 with persistent volume on local storage class

Submitted by 做~自己de王妃 on 2021-02-10 14:50:38

Question: I am trying to set up Couchbase Operator 1.2 on my local system. I followed these steps:

1. Install the Couchbase Admission Controller.
2. Deploy the Couchbase Autonomous Operator.
3. Deploy the Couchbase Cluster.
4. Access Couchbase from the UI.

The problem is that as soon as the system or Docker resets, or the pod restarts, the cluster's data is lost. So I tried adding a persistent volume with a local storage class, as mentioned in the docs, but the result was still
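For local storage, the PersistentVolumes must be created manually and pinned to a node; the Couchbase cluster spec then references the storage class in its volume claim templates. A sketch of the StorageClass and one PV, assuming hypothetical paths, sizes, and node name (the host path must already exist on that node):

```yaml
# Local volumes are never dynamically provisioned.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-pv-0
spec:
  capacity:
    storage: 10Gi                        # assumed size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # keep data when the claim is deleted
  storageClassName: local-storage
  local:
    path: /mnt/couchbase/data            # must already exist on the node
  nodeAffinity:                          # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node                # hypothetical node name
```

One PV per Couchbase pod is needed, and with WaitForFirstConsumer the claim binds only once the pod is scheduled, which avoids pinning a pod to a node whose volume it cannot reach.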

Kubernetes pod cannot mount iSCSI volume: failed to get any path for iscsi disk

Submitted by 情到浓时终转凉″ on 2021-02-10 14:43:39

Question: I would like to add an iSCSI volume to a pod as in this example. I have already prepared an iSCSI target on a Debian server and installed open-iscsi on all my worker nodes. I have also confirmed that I can mount the iSCSI target on a worker node with command-line tools (i.e. still outside Kubernetes). This works fine. For simplicity, there is no authentication (CHAP) in play yet, and there is already an ext4 file system present on the target. I would now like Kubernetes 1.14 to mount
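A pod spec following the pattern of the Kubernetes iSCSI example might look like the sketch below; the target portal address, IQN, and LUN are placeholders that must match the Debian target exactly. The "failed to get any path for iscsi disk" error usually means the kubelet's iscsiadm could not log in to the portal — commonly a mismatched IQN/LUN, a firewall blocking port 3260, or the target restricting which initiators may connect.

```yaml
# Sketch of a pod with an iSCSI volume (all target details are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: iscsi-vol
          mountPath: /data
  volumes:
    - name: iscsi-vol
      iscsi:
        targetPortal: 10.0.0.10:3260                    # hypothetical target address
        iqn: iqn.2019-01.com.example:storage.target01   # placeholder IQN
        lun: 0
        fsType: ext4
        readOnly: false
```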
