google-kubernetes-engine

Kubernetes: runContainer: API error (500): Cannot start container (docker failed to umount)

Submitted by 别来无恙 on 2019-12-08 04:16:22
Question: Sometimes pod creation fails with a 500 error on our GKE cluster:

1m 1m 1 installer-u57ab1f7707b03 Pod Normal Scheduled {default-scheduler } Successfully assigned installer-u57ab1f7707b03 to gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l
1m 1m 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container …
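A minimal troubleshooting sketch for this kind of node-level Docker failure, assuming SSH access to the node; the pod and node names are the ones from the events above, and the exact service-restart syntax may differ per node image:

kubectl describe pod installer-u57ab1f7707b03        # full event history for the failing pod
gcloud compute ssh gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l --zone <your-zone>
sudo docker ps -a | head -n 20                       # look for containers stuck in Dead/Exited
sudo docker info                                     # storage-driver state often explains mount/umount errors
sudo service docker restart                          # last resort; restart command depends on the node image
exit
kubectl delete pod installer-u57ab1f7707b03          # let the controller reschedule the pod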

GCE/GKE Kubectl: the server doesn't have a resource type “services”

Submitted by 空扰寡人 on 2019-12-08 03:54:40
Question: I have two Kubernetes clusters on Google Container Engine, but under separate Google accounts (one using my company's email and another using my personal email). I attempted to switch from one cluster to the other. I did this by: logging in with my other email address ($ gcloud init), getting new kubectl credentials (gcloud container clusters get-credentials), and testing whether I was connected to the new cluster ($ kubectl get po). However, I was still not able to get the Kubernetes resources in the cluster. The error I …
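The answer is cut off above; as a sketch of the usual way to keep two accounts and clusters side by side (configuration name, project, cluster name, and zone below are placeholders):

gcloud config configurations create personal          # one named gcloud configuration per account
gcloud auth login                                      # sign in with the personal email
gcloud config set project my-personal-project
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl config get-contexts                            # each get-credentials call adds a kubectl context
kubectl config use-context gke_my-personal-project_us-central1-a_my-cluster
kubectl get po                                         # should now hit the intended cluster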

Allow only one pod of a type on a node in Kubernetes

Submitted by ぐ巨炮叔叔 on 2019-12-08 02:36:25
Question: How can I allow only one pod of a given type per node in Kubernetes? DaemonSets don't fit this use case. For example: restricting scheduling so that only one Elasticsearch pod runs on a node, to prevent data loss in case the node goes down. It can be achieved by carefully planning the pod's CPU/memory resources against the cluster's machine type. Is there any other way to do so? Answer 1: Kubernetes 1.4 introduced inter-pod affinity and anti-affinity. From the documentation: Inter-pod affinity and anti-affinity allow …
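A minimal sketch of the anti-affinity approach the answer points to, written with current API versions (on 1.4 itself the rule was expressed through an alpha annotation rather than the affinity field); the names, labels, and image tag are hypothetical:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-schedule two pods labelled app=elasticsearch
          # on the same node (topologyKey = the node's hostname label).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["elasticsearch"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: elasticsearch
        image: elasticsearch:6.8.23   # hypothetical image tag
EOF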

How to pass `sysctl` flags to docker from k8s?

Submitted by ╄→гoц情女王★ on 2019-12-08 02:18:30
Question: Scenario: I have a container image that needs to run with net.core.somaxconn > default_value. I am using Kubernetes to deploy and run in GCE. The nodes (VMs) in my cluster are configured with the correct net.core.somaxconn value. Now the challenge is to start the Docker container with the flag --sysctl=net.core.somaxconn=4096 from Kubernetes. I cannot seem to find the proper documentation to achieve this. Am I missing something obvious? Answer 1: Solution 1: use this answer as a template to see how to …
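The answer is truncated above; as a sketch of how sysctls are set at the pod level in later Kubernetes releases (net.core.somaxconn counts as an "unsafe" sysctl, so each node's kubelet must also allow it via --allowed-unsafe-sysctls; older releases used an alpha annotation instead of the securityContext field). The pod name and image are placeholders:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: somaxconn-demo
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn     # unsafe sysctl: the kubelet must explicitly whitelist it
      value: "4096"
  containers:
  - name: app
    image: nginx:1.25              # placeholder image
EOF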

User “system:anonymous” cannot proxy services in the namespace “kube-system”.: “No policy matched.\nUnknown user \”system:anonymous\“”

Submitted by 走远了吗. on 2019-12-08 01:38:55
Question: I am getting the following error when trying to access the Kubernetes dashboard URL listed in the cluster info (kubectl cluster-info). It also pops up in incognito mode in Chrome: User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched.\nUnknown user \"system:anonymous\"" Answer 1: I was able to access it with a local proxy by running: kubectl proxy and then navigating to http://127.0.0.1:8001/ui (http://127.0.0.1:8001/api/v1/namespaces/kube-system/services …
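A short sketch of the proxy workflow from the answer: requests made through kubectl proxy carry your kubeconfig credentials, so the API server no longer sees you as system:anonymous. The /ui shortcut simply redirects to the dashboard service's proxied path, which may differ depending on how the dashboard add-on is deployed in your cluster:

kubectl proxy --port=8001      # local, authenticated tunnel to the API server
# then open http://127.0.0.1:8001/ui in the browser instead of the raw cluster endpoint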

gke nginx lb health checks / can't get all instances in a “healthy” state

Submitted by 扶醉桌前 on 2019-12-08 01:02:08
Question: Using nginx's nginx-ingress-controller:0.9.0, below is the permanent state of the Google Cloud load balancer: basically, the single healthy node is the one running the nginx-ingress-controller pods. Apart from not looking good on this screen, everything works perfectly fine. The thing is, I'm wondering why such a bad status shows up on the LB. Here's the service/deployment used. I'm just getting a little lost over how this works; I hope to get some experienced feedback on how to do things right (I mean, getting …
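The answer isn't included above, but one common cause of this picture is exposing the ingress controller through a Service with externalTrafficPolicy: Local: the network load balancer's health check then only passes on nodes that actually run an nginx-ingress-controller pod. A hedged sketch of the relevant Service settings (name and labels are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  # "Local" preserves client source IPs but fails the LB health check on nodes
  # without a controller pod; "Cluster" marks every node healthy at the cost of
  # an extra hop and source NAT.
  externalTrafficPolicy: Cluster
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF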

Ingress backend REST error: The server encountered a temporary error and could not complete your request. Please try again in 30 seconds

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-07 22:51:30
Question: I am deploying an application on Google Kubernetes Engine. The application has 2 services. There is also an Ingress, which I am trying to use to expose one of the services; the Ingress is also used for HTTPS support. I have one NodePort service, "gateway", and one ClusterIP service, "internal"; "internal" should only be reached through "gateway". Here is the services config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: x-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: x-x-ip
    kubernetes.io/tls-acme: "true"
…
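The configuration above is cut off, but the "temporary error ... try again in 30 seconds" page is what the GCE load balancer serves while all of an Ingress backend's instances are unhealthy. A hedged sketch of the usual fix: give the NodePort-backed pods a readiness probe that returns HTTP 200, since the GCE ingress controller derives its backend health check from that probe (or from "/" if none is set). Names, image, and paths are placeholders:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: gcr.io/my-project/gateway:latest   # placeholder image
        ports:
        - containerPort: 8080
        # The GCE ingress health check must receive a 200 on this path;
        # otherwise the backend stays unhealthy and the LB keeps returning
        # the "temporary error" page.
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
EOF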

Kubernetes: pass an env variable to “kubectl create”

Submitted by 不问归期 on 2019-12-07 18:38:27
Question: I need to pass a dynamic env variable to kubectl create, something like this: kubectl create -f app.yaml --Target=prod. Based on Target, the code deploys to different servers. Answer 1: You can achieve this in two ways: Use Helm. It is a "package manager" for Kubernetes and is built exactly for this use case (dynamic variables to configure the behaviour of your resources). If it is only a single variable, "converting" your deployment is as simple as creating a new Helm chart, copying your files into templates/, …
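A sketch of the Helm route the answer describes, in Helm 3 syntax (chart name, value name, and release name are illustrative), plus a lighter-weight substitution alternative that is not part of the truncated answer:

# Turn the manifest into a small chart and template the variable
helm create app-chart
cp app.yaml app-chart/templates/app.yaml
# inside app-chart/templates/app.yaml, replace the hard-coded value with
# {{ .Values.target }} and add a default ("target: staging") to values.yaml

# Install per environment, overriding the value on the command line
helm install app-prod ./app-chart --set target=prod

# Alternative: keep plain YAML with ${TARGET} placeholders and expand them
# with envsubst before piping the result to kubectl
TARGET=prod envsubst < app.yaml | kubectl create -f -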

Google App Engine: deploying a custom VM app takes a long time

Submitted by 浪子不回头ぞ on 2019-12-07 13:14:03
Question: Here is my worker.yaml:

runtime: custom #python27
api_version: 1
threadsafe: false
vm: true
service: worker
env_variables:
  PYTHON_ENV: lab
network:
  instance_tag: testing123
  name: dev
handlers:
- url: /.*
  script: Framework.Workers.PushQueues.worker.app
  login: admin

the Dockerfile:

FROM us.gcr.io/smiling-diode-638/basic-algo-docker-v2

and the console output:

gcloud app deploy worker.yaml --verbosity='debug'
✱ DEBUG: Running gcloud.app.deploy with Namespace(__calliope_internal_deepest_parser …
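No answer is included above; one common way to shorten deploys of a custom-runtime (vm: true / flexible) service is to build and push the image yourself and point the deploy at it, so gcloud can skip the remote image build. A hedged sketch; the image name and tag are placeholders loosely based on the project ID in the Dockerfile above:

docker build -t us.gcr.io/smiling-diode-638/worker:v1 .
gcloud auth configure-docker                 # newer gcloud; older releases used "gcloud docker -- push"
docker push us.gcr.io/smiling-diode-638/worker:v1
gcloud app deploy worker.yaml --image-url=us.gcr.io/smiling-diode-638/worker:v1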

Service account throws an insufficient permission error even though it has 'owner' privileges

Submitted by 痞子三分冷 on 2019-12-07 13:06:39
Question: In Google Cloud Platform I created a service account and assigned it the Owner and Service Account Actor roles. When I run the command below:

gcloud container clusters get-credentials travis-test --zone us-central1-c --project phantom-zone-00001

it returns the error below:

Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/phantom-zone-00001/zones/us-central1-c/clusters …
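No answer is included above; a hedged checklist in command form, assuming the problem is which credentials gcloud is actually using rather than the roles themselves (the key-file path and service-account email are placeholders):

gcloud auth activate-service-account --key-file=/path/to/key.json
gcloud auth list                              # the service account should be marked as active
gcloud config set project phantom-zone-00001

# If the 403 persists, grant the account an explicit GKE role on the project
gcloud projects add-iam-policy-binding phantom-zone-00001 \
  --member serviceAccount:my-sa@phantom-zone-00001.iam.gserviceaccount.com \
  --role roles/container.developer            # or roles/container.admin / roles/container.viewer

gcloud container clusters get-credentials travis-test --zone us-central1-c --project phantom-zone-00001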