google-kubernetes-engine

Define size for /dev/shm on container engine

Submitted by 别等时光非礼了梦想 on 2019-11-28 18:23:05
I'm running Chrome with Xvfb on Debian 8. It works until I open a tab and try to load content; then the process dies silently. I have gotten it to run smoothly in local Docker using docker run --shm-size=1G. There is a known bug in Chrome that causes it to crash when /dev/shm is too small. I am deploying to Container Engine, and inspecting the OS specs shows the host has a solid 7G mounted at /dev/shm, but the actual container is only allocated 64M, so Chrome crashes. How can I set the size of /dev/shm when using kubectl to deploy to Container Engine? Answer: Mounting an emptyDir to /dev/shm …
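The answer is cut off at the mention of emptyDir; below is a minimal sketch of that approach, assuming placeholder pod and image names (sizeLimit is only honoured on newer Kubernetes versions, so treat it as optional).

apiVersion: v1
kind: Pod
metadata:
  name: chrome-xvfb                # placeholder name
spec:
  containers:
  - name: chrome
    image: gcr.io/my-project/chrome-xvfb:latest   # placeholder image
    volumeMounts:
    - name: dshm
      mountPath: /dev/shm          # shadows the default 64M /dev/shm mount
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory               # tmpfs-backed, so it behaves like shm
      sizeLimit: 1Gi               # ignored on clusters that predate sizeLimit support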

How can I trigger a Kubernetes Scheduled Job manually?

Submitted by ≡放荡痞女 on 2019-11-28 16:47:33
I've created a Kubernetes ScheduledJob, which runs twice a day according to its schedule. However, I would like to trigger it manually for testing purposes. How can I do this? Answer from pedro_sland: The issue #47538 that @jdf mentioned is now closed, and this is now possible. The original implementation can be found here, but the syntax has changed. With kubectl v1.10.1+ the command is: kubectl create job --from=cronjob/<cronjob-name> <job-name>. It seems to be backwards compatible with older clusters, as it worked for me on v0.8.x. You can also create a simple Job based on your ScheduledJob. If you already …
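A concrete invocation of the command quoted above, with a hypothetical CronJob name:

# Assuming an existing CronJob named "nightly-report" in the current namespace
kubectl create job --from=cronjob/nightly-report nightly-report-manual-1

# Watch the manually created Job, then clean it up once testing is done
kubectl get job nightly-report-manual-1 --watch
kubectl delete job nightly-report-manual-1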

How do I run private docker images on Google Container Engine

Submitted by £可爱£侵袭症+ on 2019-11-28 15:49:02
Question: How do I run a Docker image that I built locally on Google Container Engine? Answer 1: You can push your image to Google Container Registry and reference it from your pod manifest. Detailed instructions, assuming you have DOCKER_HOST properly set up, a GKE cluster running the latest version of Kubernetes, and the Google Cloud SDK installed. Set up some environment variables and configure the SDK: gcloud components update kubectl; gcloud config set project <your-project>; gcloud config set compute/zone <your-cluster-zone> …
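A hedged sketch of the push-and-reference flow the answer begins to describe; the project ID, image name, and tag are placeholders, and the exact auth step depends on the Cloud SDK version.

# Tag the locally built image for Google Container Registry (placeholder names)
docker tag my-app:latest gcr.io/your-project/my-app:v1

# Push it (newer SDKs: run "gcloud auth configure-docker" once beforehand;
# older SDKs used "gcloud docker -- push" instead)
docker push gcr.io/your-project/my-app:v1

The pushed image can then be referenced from a pod manifest by its full registry path, for example:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: gcr.io/your-project/my-app:v1   # full GCR path, placeholder values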

Is it necessary to recreate a Google Container Engine cluster to modify API permissions?

Submitted by 时光怂恿深爱的人放手 on 2019-11-28 14:09:54
After reading this earlier question, I have some follow-up questions. I have a Google Container Engine cluster that lacks the Cloud Monitoring API access permission. According to this post I cannot enable it, but the referenced post is one year old. Just to be sure: is it still correct that, to enable (for example) the Cloud Monitoring API for my GKE cluster, I would have to recreate the entire cluster, because there is no way to change these permissions after cluster creation? Also, if I have to do this, it seems to me that it would be best to enable all APIs with the broadest possible permissions, …
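For context, node OAuth scopes are declared at creation time, which is why recreation comes up at all. The sketch below shows where they are specified; the cluster and pool names, zone, and scope list are illustrative, and adding a new node pool with broader scopes is a possible middle ground depending on your gcloud version.

# Scopes are declared when the cluster is created
gcloud container clusters create my-cluster \
    --zone europe-west1-b \
    --scopes "https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/logging.write"

# Possible middle ground: add a node pool with the extra scopes instead of
# recreating the whole cluster
gcloud container node-pools create monitoring-pool \
    --cluster my-cluster \
    --zone europe-west1-b \
    --scopes monitoring,logging-write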

GCE LoadBalancer : Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1

Submitted by 随声附和 on 2019-11-28 12:23:38
In one of my HTTP(S) load balancers, I wish to change my backend configuration to increase the timeout from 30s to 60s (we have a few 502s without any server-side logs, and I want to check whether they come from the LB). But when I validate the change, I get an error saying "Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1", even though I didn't change the named port. This issue seems to be the same, but the only solution is a workaround that does not work in my case. Thanks for your help. Answer: I'm sure the OP has resolved this by now, but for anyone else pulling their …
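One workaround often suggested for this error is to re-declare the named port on the instance group behind the load balancer; whether it applies in this particular case is unclear, and the group name, zone, and port below are placeholders.

# Check which named ports the backing instance group currently declares
gcloud compute instance-groups get-named-ports my-instance-group --zone us-central1-a

# Re-declare the named port so the backend no longer reports port 0
gcloud compute instance-groups set-named-ports my-instance-group \
    --zone us-central1-a \
    --named-ports "http:80"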

Changing Permissions of Google Container Engine Cluster

Submitted by 元气小坏坏 on 2019-11-28 09:20:22
I have been able to successfully create a Google Container Engine cluster in the Developers Console and have deployed my app to it. It all starts up fine; however, I find that I can't connect to Cloud SQL and get "Error: Handshake inactivity timeout". After a bit of digging, I noted that I hadn't had any trouble connecting to the database from App Engine or my local machine, which seemed strange. It was then that I noticed the cluster permissions. When I select my cluster I see the following permissions: User info: Disabled; Compute: Read Write; Storage: Read Only; Task queue: Disabled; BigQuery: Disabled …
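The excerpt stops at the permissions list. A commonly recommended route that avoids depending on node scopes at all is the Cloud SQL Proxy sidecar, sketched below with placeholder project, instance, image, and secret names.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-cloudsql            # placeholder name
spec:
  containers:
  - name: app
    image: gcr.io/my-project/my-app:v1        # placeholder app image
    env:
    - name: DB_HOST
      value: "127.0.0.1"             # the proxy listens on localhost
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: ["/cloud_sql_proxy",
              "-instances=my-project:us-central1:my-db=tcp:3306",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
    - name: cloudsql-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-credentials
    secret:
      secretName: cloudsql-credentials   # created from a service-account key

The node scopes shown in the permissions list, by contrast, can only be set when a cluster or node pool is created.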

Implementing workaround for missing http->https redirection in ingress-gce with GLBC

Submitted by 一世执手 on 2019-11-28 09:09:28
Question: I am trying to wrap my brain around the suggested workarounds for the lack of built-in HTTP->HTTPS redirection in ingress-gce using GLBC. What I am struggling with is how to use the custom backend that is suggested as one option to overcome this limitation (e.g. in "How to force SSL for Kubernetes Ingress on GKE"). In my case the application behind the load balancer does not itself run Apache or nginx, and I just can't figure out how to include e.g. Apache (which I know far better than nginx …
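Since the application itself runs no web server, one hedged sketch of the custom-backend idea is to run a small Apache (httpd) Deployment whose only job is to redirect plain-HTTP requests, using the X-Forwarded-Proto header that the Google load balancer sets. The rewrite rules below would live in a ConfigMap mounted into that container; all names are hypothetical.

apiVersion: v1
kind: ConfigMap
metadata:
  name: https-redirect-conf          # hypothetical name
data:
  redirect.conf: |
    # GLBC sets X-Forwarded-Proto; bounce requests that arrived as plain HTTP
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} =http
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

The httpd container that loads this config (via an Include directive) and proxies HTTPS traffic on to the application is omitted here for brevity.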

Kubernetes pods failing on “Pod sandbox changed, it will be killed and re-created”

Submitted by 旧巷老猫 on 2019-11-28 07:19:44
Question: On a Google Container Engine (GKE) cluster, I sometimes see a pod (or more) not starting; looking at its events, I can see the following: "Pod sandbox changed, it will be killed and re-created." If I wait, it just keeps retrying. If I delete the pod and allow it to be recreated by the Deployment's ReplicaSet, it starts properly. The behavior is inconsistent. Kubernetes versions 1.7.6 and 1.7.8. Any ideas? Answer 1: I can see the following message posted on the Google Cloud Status Dashboard: "We …
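For reference, commands of the kind used to surface the event quoted in the question (pod and namespace names are placeholders):

# Inspect the failing pod's events and status
kubectl describe pod my-pod -n my-namespace

# Or list recent events in the namespace, oldest first
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp

# Deleting the pod lets the Deployment's ReplicaSet recreate it, as noted above
kubectl delete pod my-pod -n my-namespace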

ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?

Submitted by 房东的猫 on 2019-11-28 01:36:51
Question: I'm building a service where users can build web apps; these apps will be hosted under a virtual DNS name, *.laska.io. For example, if Tom and Jerry both built an app, they'd have it hosted under tom.laska.io and jerry.laska.io. Now, suppose I have 1000 users. Should I create one big Ingress that looks like this? apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" spec: rules: - …
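The manifest quoted above arrives flattened and cut off in the excerpt; a reconstruction filled out with the question's two example hosts follows, where the per-user Service names and ports are assumptions.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: tom-app            # assumed per-user Service
          servicePort: 80
  - host: jerry.laska.io
    http:
      paths:
      - backend:
          serviceName: jerry-app          # assumed per-user Service
          servicePort: 80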

How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes

Submitted by 筅森魡賤 on 2019-11-27 16:22:12
Question: With the help of Kubernetes I am running daily jobs on GKE. On a daily basis, based on a cron configured in Kubernetes, a new container spins up and tries to insert some data into BigQuery. Our setup has two different GCP projects: in one project we maintain the data in BigQuery, and in the other we run all of GKE. So when GKE has to interact with the other project's resources, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS, which …
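A sketch of the pattern the question is reaching for: store a key for a service account from the BigQuery project in a Secret, mount it into the CronJob's pods, and point GOOGLE_APPLICATION_CREDENTIALS at it. Every name, the image, and the schedule below are placeholders.

# One-time: create the Secret from a downloaded service-account key
#   kubectl create secret generic bq-sa-key --from-file=key.json=./key.json

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-bq-load                  # placeholder name
spec:
  schedule: "0 3 * * *"                # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: loader
            image: gcr.io/my-gke-project/bq-loader:v1   # placeholder image
            env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
            volumeMounts:
            - name: bq-sa-key
              mountPath: /var/secrets/google
              readOnly: true
          volumes:
          - name: bq-sa-key
            secret:
              secretName: bq-sa-key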