google-kubernetes-engine

Ingress backend REST error: The server encountered a temporary error and could not complete your request. Please try again in 30 seconds

寵の児 submitted on 2019-12-06 09:56:47
I am deploying an application on Google Kubernetes Engine. The application has two services. There is also an Ingress, which I am trying to use to expose one service and to provide HTTPS support. I have one NodePort service, "gateway", and one ClusterIP service, "internal"; "internal" should be accessed through "gateway". Here is the config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: x-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: x-x-ip
    kubernetes.io/tls-acme: "true"
  labels:
    app: gateway
spec:
  tls:
  - secretName: secret
    hosts:
    - x.x.com
  backend:
    serviceName: gateway
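For reference, an extensions/v1beta1 Ingress backend needs both a serviceName and a servicePort, and on GKE the "temporary error" page often persists until the backing NodePort service passes the load balancer's health check. A minimal sketch of a complete default backend, assuming (as a placeholder) that the gateway Service listens on port 80 and answers GET / with 200:

# Sketch only: "gateway" port 80 and a 200 response on / are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: x-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: x-x-ip
spec:
  tls:
  - secretName: secret
    hosts:
    - x.x.com
  backend:
    serviceName: gateway
    servicePort: 80
EOF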

Set value in dependency of Helm chart

旧巷老猫 submitted on 2019-12-06 07:48:35
Question: I want to use the postgresql chart as a requirement of my Helm chart. My requirements.yaml file therefore looks like this:

dependencies:
- name: "postgresql"
  version: "3.10.0"
  repository: "@stable"

In the postgreSQL Helm chart I now want to set the username with the property postgresqlUsername (see https://github.com/helm/charts/tree/master/stable/postgresql for all properties). Where do I have to specify this property in my project so that it gets propagated to the postgreSQL dependency?

Answer 1:
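Helm's documented convention is that values for a dependency are nested under a top-level key in the parent chart's values.yaml that matches the dependency's name. A minimal sketch, assuming the parent chart lives in ./mychart and myuser is just a placeholder username:

# Values under the "postgresql:" key are handed to the postgresql subchart.
cat > ./mychart/values.yaml <<'EOF'
postgresql:
  postgresqlUsername: myuser
EOF

helm dependency update ./mychart   # fetches postgresql 3.10.0 into charts/
# Alternatively, override at install time (Helm 2 syntax; Helm 3 also needs a release name):
helm install ./mychart --set postgresql.postgresqlUsername=myuser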

K8s Ingress rule for multiple paths in same backend service

馋奶兔 submitted on 2019-12-06 06:29:53
Question: I am trying to set up an ingress load balancer. Basically, I have a single backend service with multiple paths. Let's say my backend NodePort service name is hello-app. The pod behind this service exposes multiple paths such as /foo and /bar. Below are the example NodePort service and the associated deployment:

apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  type: NodePort
  ports:
  - protocol: "TCP"
    port: 7799
    targetPort: 7799
---
apiVersion: apps/v1
kind:
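For the routing itself, an Ingress rule can list several paths that all point at the same backend service and port. A minimal sketch, assuming the hello-app NodePort service above:

# /foo and /bar both route to hello-app on port 7799.
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: hello-app
          servicePort: 7799
      - path: /bar
        backend:
          serviceName: hello-app
          servicePort: 7799
EOF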

How do I disable the Stackdriver Logging agent in a cluster?

与世无争的帅哥 submitted on 2019-12-06 06:20:12
Question: Our project recently migrated away from Stackdriver Logging. However, I cannot figure out how to get rid of the fluentd-cloud-logging-* pods in the kube-system namespace. If I delete the individual pods, they come right back. How do I kill them off for good? It is not clear to me how they are being recreated; there is certainly no DaemonSet bringing them back. I have already set monitoringService to none in the configuration shown by gcloud container clusters describe.

Answer 1: The fluentd-cloud
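Note that monitoringService controls Stackdriver Monitoring, while the fluentd logging agent is governed by the separate loggingService setting; on these GKE versions the fluentd-cloud-logging pods are typically mirror pods created from static manifests on each node, which is why deleting them has no lasting effect. A hedged sketch of switching the logging agent off at the cluster level (CLUSTER_NAME and the zone are placeholders):

# Disable the GKE-managed Stackdriver logging agent for the whole cluster.
gcloud container clusters update CLUSTER_NAME \
    --zone us-central1-a \
    --logging-service none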

How to pass `sysctl` flags to docker from k8s?

与世无争的帅哥 submitted on 2019-12-06 06:09:10
Scenario: I have a container image that needs to run with net.core.somaxconn greater than the default value. I am using Kubernetes to deploy and run it on GCE. The nodes (VMs) in my cluster are configured with the correct net.core.somaxconn value. Now the challenge is to start the Docker container with the flag --sysctl=net.core.somaxconn=4096 from Kubernetes. I cannot seem to find the proper documentation to achieve this. Am I missing something obvious?

Solution 1: use this answer as a template to see how to configure the whole node to that sysctl value; you can use something like echo 4096 >/proc/sys/net/core
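In more recent Kubernetes versions there is also a pod-level mechanism for this: securityContext.sysctls. net.core.somaxconn is treated as an unsafe sysctl, so the kubelet must allow it explicitly. A sketch, assuming nodes started with --allowed-unsafe-sysctls=net.core.somaxconn and a placeholder image:

# Pod-level sysctl; requires the kubelet flag --allowed-unsafe-sysctls=net.core.somaxconn.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: somaxconn-demo
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn
      value: "4096"
  containers:
  - name: app
    image: nginx   # placeholder image
EOF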

kubernetes dns lookup of microservice

给你一囗甜甜゛ submitted on 2019-12-06 05:49:22
I have a question on Kubernetes DNS lookup. In my service deployments, if I use "dns" instead of "env", can a microservice that talks to other microservices in the cluster get the DNS names of all of them? I found this piece of code: if I use env, I get the host information from environment variables; but if I am using DNS, what format do the names take and how do I get them? Is there a DNS object I can query on the client side?

if (isset($_GET['cmd']) === true) {
    $host = 'redis-master';
    if (getenv('GET_HOSTS_FROM') == 'env') {
        $host = getenv('REDIS_MASTER_SERVICE_HOST');
    }

Ref: https://github.com
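With cluster DNS there is no special client-side object: every Service gets an A record, so an ordinary hostname lookup is enough. The short name works from within the same namespace, and the fully qualified form is <service>.<namespace>.svc.cluster.local. A quick way to check from inside any pod (pod and namespace names below are placeholders):

# Resolve a Service by its short name (same namespace) and by its FQDN.
kubectl exec -it some-pod -- nslookup redis-master
kubectl exec -it some-pod -- nslookup redis-master.default.svc.cluster.local
# In application code, simply connect to the hostname "redis-master";
# cluster DNS resolves it to the Service's ClusterIP.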

Understanding --master-ipv4-cidr when provisioning private GKE clusters

若如初见. submitted on 2019-12-06 04:37:34
Question: I am trying to further understand what exactly happens when I provision a private cluster in Google Kubernetes Engine. Google provides this example of provisioning a private cluster where the control plane services (e.g. the Kubernetes API) live on the 172.16.0.16/28 subnet: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters

gcloud beta container clusters create pr-clust-1 \
    --private-cluster \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create
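As a rough mental model: the /28 passed to --master-ipv4-cidr is reserved for the Google-managed network where the control plane runs, reached from your VPC over an automatically created peering; it is not carved out of your node subnet. One hedged way to inspect what the cluster actually ended up with (cluster name and zone are placeholders, and field names assume the current GKE API):

# Show the control-plane CIDR and the auto-created peering of a private cluster.
gcloud container clusters describe pr-clust-1 \
    --zone us-central1-a \
    --format='value(privateClusterConfig.masterIpv4CidrBlock,privateClusterConfig.peeringName)'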

Allow only one pod of a type on a node in Kubernetes

喜欢而已 submitted on 2019-12-06 04:12:05
How to allow only one pod of a type on a node in Kubernetes? DaemonSets don't fit this use case. For example: restricting scheduling to one Elasticsearch pod per node, to prevent data loss if the node goes down. It can be achieved by carefully planning the pod's CPU/memory requests against the cluster's machine type. Is there any other way to do so?

Kubernetes 1.4 introduced inter-pod affinity and anti-affinity. From the documentation: Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to schedule on based on labels on pods that are already
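Applied to the Elasticsearch example, a required podAntiAffinity rule on the hostname topology key keeps two pods carrying the same label off the same node. A minimal sketch (labels, replica count, and image are placeholders):

# Each replica refuses nodes that already run a pod labeled app=elasticsearch.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["elasticsearch"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0   # placeholder version
EOF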

Resize a gkePersistentDisk pod volume after gcloud compute disks resize

吃可爱长大的小学妹 submitted on 2019-12-06 04:03:06
Question: Volumes created through GKE can easily be resized using gcloud compute disks resize [volume name from kubectl get pv]. The pod keeps running, but a df inside the pod still reports the old size. More importantly, kubectl describe pv still reports the original "capacity". Is there a way to grow the pod's actual storage space on the volume? Official support may be on the roadmap, according to https://github.com/kubernetes/kubernetes/issues/24255#issuecomment-210227126, but where is
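On clusters recent enough to support volume expansion, the Kubernetes-native route is to enable allowVolumeExpansion on the StorageClass and grow the PVC rather than resizing the disk directly; the filesystem is then resized by Kubernetes (on some versions only after the pod restarts). A sketch with placeholder names (the "standard" class and "my-claim" PVC are assumptions):

# Allow expansion on the storage class (one-time), then grow the claim.
kubectl patch storageclass standard \
    -p '{"allowVolumeExpansion": true}'
kubectl patch pvc my-claim \
    -p '{"spec": {"resources": {"requests": {"storage": "50Gi"}}}}'
# kubectl describe pv should report the new capacity once the resize completes.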

google app engine deploy a custom vm app takes a long time to deploy

倾然丶 夕夏残阳落幕 submitted on 2019-12-06 03:10:49
Here is my worker.yaml:

runtime: custom #python27
api_version: 1
threadsafe: false
vm: true
service: worker
env_variables:
  PYTHON_ENV: lab
network:
  instance_tag: testing123
  name: dev
handlers:
- url: /.*
  script: Framework.Workers.PushQueues.worker.app
  login: admin

Dockerfile:

FROM us.gcr.io/smiling-diode-638/basic-algo-docker-v2

and the console output:

gcloud app deploy worker.yaml --verbosity='debug'
✱ DEBUG: Running gcloud.app.deploy with Namespace(__calliope_internal_deepest_parser=ArgumentParser(prog='gcloud.app.deploy', usage=None, description='Deploy the local code and/or configuration of
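Deploys of custom-runtime (vm: true) services rebuild a Docker image and boot a new Compute Engine VM each time, which tends to dominate the deploy time. One hedged way to shorten it is to build and push the image yourself and hand gcloud a prebuilt image; MY_PROJECT and the tag below are placeholders, and Docker is assumed to be authenticated to gcr.io (e.g. via gcloud auth configure-docker):

# Build and push the worker image once, then deploy pointing at the prebuilt image.
docker build -t us.gcr.io/MY_PROJECT/worker:v1 .
docker push us.gcr.io/MY_PROJECT/worker:v1
gcloud app deploy worker.yaml --image-url=us.gcr.io/MY_PROJECT/worker:v1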