kube-dns

Kube-dns always in pending state

Submitted by 孤街醉人 on 2021-02-10 15:12:50
Question: I have deployed Kubernetes on a virt-manager VM following this link: https://kubernetes.io/docs/setup/independent/install-kubeadm/ When I join my other VM to the cluster, I find that kube-dns is in the Pending state.
root@ubuntu1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu1                      1/1     Running   0          7m
kube-system   kube-apiserver-ubuntu1            1/1     Running   0          8m
kube-system   kube-controller-manager-ubuntu1   1/1     Running   0          8m
kube-system   kube-dns-86f4d74b45
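On a fresh kubeadm cluster, kube-dns (or CoreDNS) staying in Pending is most often because no pod network (CNI) add-on has been installed yet, so the DNS pod cannot be scheduled. A minimal sketch of how to confirm and fix this; the pod name suffix and the Flannel manifest URL below are illustrative, and the add-on must match the --pod-network-cidr passed to kubeadm init:
# Look for "network plugin is not ready" or scheduling events (pod suffix is illustrative)
kubectl -n kube-system describe pod kube-dns-86f4d74b45-xxxxx
# Nodes stay NotReady until a CNI plugin is installed
kubectl get nodes -o wide
# Install a pod network add-on, e.g. Flannel (example manifest URL)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml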

How to disable cross-communication between pods in 2 different namespaces in Kubernetes

Submitted by 余生长醉 on 2021-02-10 14:23:30
Question: I have 2 namespaces, with 1 pod and 1 service running in each. Example:
Namespace 1: default (Pod: pod1, Service: pod1service)
Namespace 2: test (Pod: pod1, Service: pod1service)
I can actually make an HTTP request from the pod in namespace 2 to the pod in namespace 1:
curl -H "Content-Type: application/json" -X GET http://pod1service.default.svc.cluster.local/some/api
How do I disable communication between the 2 different namespaces?
Answer 1: You need to configure network policies. For that to work you also need to use a network
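The answer is cut off above, but the approach it names is a NetworkPolicy that only admits traffic from the pod's own namespace; it only takes effect on a CNI that enforces NetworkPolicy (e.g. Calico or Cilium). A minimal sketch for the default namespace, using the names from the question:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: default
spec:
  podSelector: {}            # select every pod in the default namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # only pods from this same namespace may connect
A similar policy in the test namespace would block traffic in the other direction as well.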

How to obtain the IP address of a Kubernetes pod by querying DNS SRV records?

Submitted by ぃ、小莉子 on 2021-02-08 08:46:19
Question: I am trying to create a Kubernetes job inside which I will run "dig srv" queries to find out the IP addresses of all the pods for any specific service running on the same cluster. Is this achievable? I would like to elaborate a little more on the problem statement. There are a few services already running on the cluster. The requirement is to have a tool that can accept a service name and list the IP addresses of all the pods belonging to that service. I was able to do this by using
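One way this is commonly done, sketched here under the assumption that the target service is headless (clusterIP: None) and exposes a named port http: per the Kubernetes DNS spec, an SRV query for _port._protocol.service.namespace returns one record per ready pod, and each returned target hostname resolves to a pod IP. The service, namespace, and port names below are illustrative:
# Run from inside a pod/job in the cluster
dig +short SRV _http._tcp.myservice.default.svc.cluster.local
# Each SRV answer carries a target hostname; resolve it to get the pod IP
dig +short A <target-hostname-from-the-srv-answer>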

Multi-broker Kafka on Kubernetes: how to set KAFKA_ADVERTISED_HOST_NAME

Submitted by 。_饼干妹妹 on 2020-12-08 08:00:11
Question: My current Kafka deployment file with 3 Kafka brokers looks like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka-instance
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST
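The manifest is truncated at KAFKA_ADVERTISED_HOST, but a common pattern for giving each broker in a StatefulSet its own advertised address is to inject the pod name via the downward API and combine it with the per-pod DNS name of the headless service. A sketch of the env section, assuming the headless service kafka-headless in the default namespace; Kubernetes expands $(MY_POD_NAME) because it is defined earlier in the list:
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name      # kafka-0, kafka-1, kafka-2
  - name: KAFKA_ADVERTISED_PORT
    value: "9092"
  - name: KAFKA_ADVERTISED_HOST_NAME
    # each broker advertises its stable per-pod DNS name
    value: "$(MY_POD_NAME).kafka-headless.default.svc.cluster.local"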

Kubernetes - Granting RBAC access to anonymous users in kube-dns

Submitted by 心不动则不痛 on 2020-05-30 09:56:39
Question: I have a Kubernetes cluster set up with a master and a worker node. kubectl cluster-info shows kubernetes-master as well as kube-dns running successfully. I am trying to access the URL below; since it is internal to my organization, it is not visible to the external world.
https://10.118.3.22:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
But I am getting the error below when I access it:
{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message":
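The error body is cut off, but a request to the API server without credentials runs as the system:anonymous user, which by default is not authorized for the services/proxy subresource, so the server returns a Failure/Forbidden status. A minimal sketch of RBAC objects that would grant that access; note that binding rights to anonymous users is generally discouraged, and going through kubectl proxy or a service-account token is safer:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-dns-proxy-reader
rules:
  - apiGroups: [""]
    resources: ["services/proxy"]   # could be narrowed further with resourceNames
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-dns-proxy-anonymous
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-dns-proxy-reader
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:anonymous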

Kubernetes: ERR_NAME_NOT_RESOLVED

Submitted by 戏子无情 on 2020-05-13 07:40:08
Question: I have deployed a MongoDB, a Spring Boot BE, and an Angular app within GKE. My FE service is a load balancer; it needs to connect to my BE to get data, but I'm getting a console error in my browser: GET http://contactbe.default.svc.cluster.local/contacts net::ERR_NAME_NOT_RESOLVED. My FE needs to consume the /contacts endpoint to get data. I'm using the DNS name of my BE service (contactbe.default.svc.cluster.local) within my Angular app. This is the yml file that I used to create my deployment:
apiVersion: v1
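The deployment yml is cut off above, but the symptom itself has a common explanation: the Angular code runs in the user's browser, outside the cluster, so cluster-internal names like contactbe.default.svc.cluster.local cannot resolve there. The frontend has to reach the backend through an externally reachable address, for example by requesting a relative path (/contacts) and routing that path to the backend with an Ingress. A sketch; the frontend Service name contactfe and the port numbers are assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: contact-ingress
spec:
  rules:
    - http:
        paths:
          - path: /contacts
            pathType: Prefix
            backend:
              service:
                name: contactbe      # backend Service from the question
                port:
                  number: 80         # assumed Service port
          - path: /
            pathType: Prefix
            backend:
              service:
                name: contactfe      # assumed frontend Service name
                port:
                  number: 80
With this in place the Angular app can call /contacts on the same host it was served from instead of a cluster-internal DNS name.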