Kubernetes

How to obtain the IP address of a Kubernetes pod by querying DNS SRV records?

ぃ、小莉子 Submitted on 2021-02-08 08:46:19

Question: I am trying to create a Kubernetes Job inside which I will run "dig srv" queries to find out the IP addresses of all the pods of any specific service running on the same cluster. Is this achievable? I would like to elaborate a little more on the problem statement. A few services are already running on the cluster. The requirement is a tool that accepts a service name and lists the IP addresses of all the pods belonging to that service. I was able to do this by using
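SRV lookups return one record per endpoint only for headless Services (`clusterIP: None`), since a regular Service resolves to its single cluster IP. A minimal sketch of a Job that runs the lookup in-cluster — the service name `my-service`, the `default` namespace, and the image are assumptions, not values from the question:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: srv-lookup
spec:
  template:
    spec:
      containers:
      - name: dig
        image: tutum/dnsutils          # any image that ships `dig` works
        command:
        - sh
        - -c
        # +short prints one "priority weight port target" line per pod;
        # resolving each target hostname then yields the pod IP
        - dig +short srv my-service.default.svc.cluster.local
      restartPolicy: Never
```

The target hostnames in the SRV answer (e.g. `<pod>.my-service.default.svc.cluster.local`) can then be resolved with a plain A lookup to get the individual pod IPs.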

Kubernetes use private DNS

我只是一个虾纸丫 Submitted on 2021-02-08 08:43:17

Question: Is it possible to use a private DNS server in Kubernetes? For example, an application needs to connect to an external DB by its hostname. The DNS entry that resolves the IP is held in a private DNS server. My AKS (Azure Kubernetes Service) cluster is running version 1.17, which already uses the new CoreDNS. My first try was to use that private DNS server, as on a VM, by configuring the pods' /etc/resolv.conf file: dnsPolicy: "None" dnsConfig: nameservers: - 10.76.xxx.xxx - 10.76.xxx.xxx searches: - az-q
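A sketch of that per-pod approach with the fields laid out; the nameserver IP and search domain here are placeholders, not the values from the question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-private-dns
spec:
  containers:
  - name: app
    image: nginx
  dnsPolicy: "None"            # bypass the cluster's CoreDNS entirely
  dnsConfig:
    nameservers:
    - 10.0.0.10                # placeholder: private DNS server IP
    searches:
    - corp.example.com         # placeholder: private search domain
```

Note that with `dnsPolicy: "None"` the pod can no longer resolve in-cluster service names unless the private DNS server forwards cluster zones back; forwarding just the private zone from CoreDNS to the private server avoids that trade-off.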

Run command on Minikube startup

你说的曾经没有我的故事 Submitted on 2021-02-08 08:23:32

Question: I'm using Minikube to work with Kubernetes on my local machine, and I would like to run a command on the VM just after startup (preferably before the Pods start). I can run it manually with minikube ssh, but that's a bit of a pain to do after every restart and is difficult to wrap in a script. Is there an easy way to do this? The command in my case is this, so that paths on the VM match paths on my host machine: sudo mount --bind /hosthome/<user> /home/<user> Answer 1: Maybe flags which can
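For the mount case specifically, a possible sketch is to let Minikube do the mount at startup via its `--mount`/`--mount-string` flags rather than running the command after boot; the paths are the ones from the question, and exact flag behavior may vary by Minikube version and driver:

```shell
# Mount the host directory into the VM as part of startup,
# so no manual `sudo mount --bind` is needed after each restart.
minikube start --mount --mount-string="/hosthome/<user>:/home/<user>"
```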

Replacing database connection strings in the Docker image

六眼飞鱼酱① Submitted on 2021-02-08 08:18:22

Question: I'm having a hard time with the application's release process. The app is developed in .NET Core and uses appsettings.json, which holds the connection string to a database. The app should be deployed to a Kubernetes cluster in Azure. We have build and release pipelines in Azure DevOps, so the process is automated, but the problem stems from the need to deploy the same app to multiple environments (DEV/QA/UAT), where every environment uses its own database. When we build the Docker image, the
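One common way to keep a single image across environments is to let ASP.NET Core's configuration system override appsettings.json from environment variables: the variable `ConnectionStrings__Default` maps onto the `ConnectionStrings:Default` key. A sketch of the Deployment side, where the secret name `db-conn`, its key, and the image are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest        # same image in every env
        env:
        # Double underscore maps to ":" in .NET configuration keys,
        # so this overrides ConnectionStrings:Default from appsettings.json.
        - name: ConnectionStrings__Default
          valueFrom:
            secretKeyRef:
              name: db-conn                    # a per-environment Secret
              key: connectionString
```

Each environment then carries its own `db-conn` Secret, and the image itself stays environment-agnostic.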

oauth2-proxy authentication calls are slow on a Kubernetes cluster with auth annotations for the NGINX ingress

筅森魡賤 Submitted on 2021-02-08 07:25:19

Question: We have secured some of our services on the K8S cluster using the approach described on this page. Concretely, we have: nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth" nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.${var.hosted_zone}/oauth2/start?rd=/redirect/$http_host$escaped_request_uri" set on the service to be secured, and we have followed this tutorial to have only one deployment of oauth2_proxy per cluster. We have 2 proxies set up, both
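With an external hostname in `auth-url`, every request triggers an auth subrequest that leaves the cluster through the load balancer and comes back in, which is a common source of the slowness. A possible sketch is to point `auth-url` at the oauth2-proxy ClusterIP service directly, keeping the subrequest in-cluster; the service name, namespace, and port below are assumptions:

```yaml
# auth subrequest stays inside the cluster (internal service DNS):
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.oauth2-proxy.svc.cluster.local:4180/oauth2/auth"
# the browser redirect for sign-in must remain a public URL:
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.example.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
```

Only `auth-url` can be internal, since NGINX calls it server-side; `auth-signin` is followed by the user's browser and must stay externally resolvable.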

Create a custom resource in Kubernetes using the generateName field

有些话、适合烂在心里 Submitted on 2021-02-08 07:21:24

Question: I have a sample CRD defined in crd.yaml: kind: CustomResourceDefinition metadata: name: testconfig.demo.k8s.com namespace: testns spec: group: demo.k8s.com versions: - name: v1 served: true storage: true scope: Namespaced names: plural: testconfigs singular: testconfig kind: TestConfig I want to create a custom resource based on the above CRD, but I don't want to assign a fixed name to the resource; rather, I want to use the generateName field. So I generated the below cr.yaml, but when I apply it, it gives an error
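The usual cause of that error is the verb: `kubectl apply` tracks objects by `metadata.name` and rejects manifests that only have `generateName`, while `kubectl create` accepts them and lets the API server append a random suffix. A sketch of such a custom resource, with the group/version taken from the question's CRD:

```yaml
apiVersion: demo.k8s.com/v1
kind: TestConfig
metadata:
  generateName: testconfig-   # server appends a random suffix on create
  namespace: testns
spec: {}
```

Creating it with `kubectl create -f cr.yaml` works; `kubectl apply -f cr.yaml` will keep failing, because apply has no concrete name to diff against.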

kubectl --token=$TOKEN doesn't run with the permissions of the token

让人想犯罪 __ Submitted on 2021-02-08 06:59:24

Question: When I use the kubectl command with the --token flag and specify a token, it still uses the administrator credentials from the kubeconfig file. This is what I did: NAMESPACE="default" SERVICE_ACCOUNT_NAME="sa1" kubectl create sa $SERVICE_ACCOUNT_NAME kubectl create clusterrolebinding list-pod-clusterrolebinding \ --clusterrole=list-pod-clusterrole \ --serviceaccount="$NAMESPACE":"$SERVICE_ACCOUNT_NAME" kubectl create clusterrole list-pod-clusterrole \ --verb=list \ --resource=pods TOKEN=
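A likely explanation is that the kubeconfig's client certificate authenticates at the TLS layer before the bearer token is ever considered, so `--token` appears ignored. A sketch that forces token-only auth by detaching kubectl from the kubeconfig; the server URL is a placeholder, and skipping TLS verification here is only for illustration:

```shell
# Requires Kubernetes >= 1.24; older clusters read the token from the
# service account's Secret instead.
TOKEN=$(kubectl create token sa1)

# Empty kubeconfig => no client cert is presented, so the bearer token
# is the only credential the API server sees.
kubectl --kubeconfig=/dev/null \
        --server=https://<api-server>:6443 \
        --insecure-skip-tls-verify \
        --token="$TOKEN" \
        get pods
```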

Use Kubernetes secrets as environment variables in Angular 6

心不动则不痛 Submitted on 2021-02-08 06:54:44

Question: I configured an automatic build of my Angular 6 app and deployment to Kubernetes each time code is pushed to my repository (Google Cloud Repository). Dev environment variables are classically stored in an environment.ts file like this: export const environment = { production: false, api_key: "my_dev_api_key" }; But I don't want to put my prod secrets in my repository, so I figured I could use Kubernetes secrets. So, I create a secret in Kubernetes: kubectl create secret generic literal-token -
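A complication here is that Angular's environment.ts is compiled into the bundle at build time, so a Kubernetes secret exposed as an env var isn't visible to the browser code directly. A common workaround is a container entrypoint step that renders the env var into a small JS file the app loads at runtime. A minimal sketch — `API_KEY` would come from the secret via `secretKeyRef`, and the `env.js` file name and `window.__env` shape are assumptions the app would have to read:

```shell
# Render runtime configuration from environment variables.
# The "dev-key" default is only a fallback for local runs.
API_KEY="${API_KEY:-dev-key}"
cat > env.js <<EOF
window.__env = {
  production: true,
  api_key: "${API_KEY}"
};
EOF
```

In the image, this would run before the web server starts, writing env.js into the served assets directory; index.html then includes it with a `<script src="env.js">` tag so the app reads `window.__env` instead of the compiled constants.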