google-kubernetes-engine

`docker-credential-gcloud` not in system PATH

ぐ巨炮叔叔 submitted on 2019-11-30 10:49:41
After the latest updates to gcloud and docker I'm unable to access images on my Google container registry. Locally, when I run `gcloud auth configure-docker` as per the instructions after updating gcloud, I get the following message: WARNING: `docker-credential-gcloud` not in system PATH. gcloud's Docker credential helper can be configured but it will not work until this is corrected. gcloud credential helpers already registered correctly. Running `which docker-credential-gcloud` returns `docker-credential-gcloud not found`. I have no other gcloud-related path issues and for the life of me can't
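The usual cause is that the Cloud SDK's `bin` directory is missing from `PATH`. A minimal sketch of the fix, assuming a default install under `$HOME/google-cloud-sdk` (the real location can be checked with `gcloud info --format="value(installation.sdk_root)"`):

```shell
# Assumed install path; adjust to your actual SDK root.
SDK_BIN="$HOME/google-cloud-sdk/bin"
case ":$PATH:" in
  *":$SDK_BIN:"*) ;;                       # already on PATH, nothing to do
  *) export PATH="$PATH:$SDK_BIN" ;;       # append it once
esac
# After this, `which docker-credential-gcloud` should resolve,
# and `gcloud auth configure-docker` should stop warning.
```

Adding the `export` line to your shell profile makes the fix permanent.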

Creating image pull secret for google container registry that doesn't expire?

点点圈 submitted on 2019-11-30 08:27:41
I'm trying to get Kubernetes to download images from a Google Container Registry in another project. According to the docs you should create an image pull secret using: `kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL` But I wonder what DOCKER_USER and DOCKER_PASSWORD I should use for authenticating with Google Container Registry? Looking at the GCR docs, it says that the password is the access token that you can get by running: `$ gcloud auth print-access`
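An access token expires after about an hour, so the usual non-expiring approach is to authenticate with a service-account JSON key instead, using the special `_json_key` username GCR accepts. A sketch, where `keyfile.json` is a hypothetical path to a key for a service account with read access (e.g. Storage Object Viewer) on the registry project:

```shell
# Create a pull secret backed by a long-lived service-account key.
kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)" \
  --docker-email=unused@example.com
```

The secret is then referenced from the pod spec under `imagePullSecrets`.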

Autoscaling in Google Container Engine

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-30 07:11:41
I understand the Container Engine is currently in alpha and not yet complete. From the docs I assume there is no auto-scaling of pods (e.g. depending on CPU load) yet, correct? I'd love to be able to configure a replication controller to automatically add pods (and VM instances) when the average CPU load reaches a defined threshold. Is this somewhere on the near-future roadmap? Or is it possible to use the Compute Engine Autoscaler for this? (If so, how?) As we work towards a Beta release, we're definitely looking at integrating the Google Compute Engine Autoscaler. There are actually two
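For reference, CPU-based pod autoscaling later shipped as the Horizontal Pod Autoscaler, which does exactly what the question asks for at the pod level (cluster autoscaling handles the VM level separately). A minimal sketch, with placeholder names:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder target workload
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

The same thing can be created imperatively with `kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80`.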

Cluster autoscaler not downscaling

点点圈 submitted on 2019-11-30 05:21:36
I have a regional cluster set up in Google Kubernetes Engine (GKE). The node pool is a single VM in each zone (3 total). I have a deployment with a minimum of 3 replicas controlled by an HPA. The node pool is configured to be autoscaling (cluster autoscaling, aka CA). The problem scenario: update the deployment image. Kubernetes automatically creates new pods and the CA identifies that a new node is needed. I now have 4. The old pods get removed when all new pods have started, which means I have the exact same CPU request as the minute before. But after the 10 min maximum downscale time I still
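When the CA refuses to scale down, its status ConfigMap usually says why. A quick way to inspect it (the namespace and name below are the usual defaults, though availability can vary by GKE version):

```shell
# Inspect the cluster autoscaler's own view of scale-down candidates.
kubectl -n kube-system describe configmap cluster-autoscaler-status
# Common scale-down blockers: pods without a controller, pods using local
# storage, kube-system pods without a PodDisruptionBudget, or nodes that
# have not stayed under the utilization threshold for the full delay window.
```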

Can a Persistent Volume be resized?

女生的网名这么多〃 submitted on 2019-11-30 01:03:22
I'm running a MySQL deployment on Kubernetes, however it seems like my allocated space was not enough. Initially I added a persistent volume of 50GB and now I'd like to expand that to 100GB. I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim? Yes, as of 1.11, persistent volumes can be resized on certain cloud providers. To increase volume size: edit the PVC size using `kubectl edit pvc $your_pvc`, then terminate the pod using the volume. Once the pod using the volume is terminated, the filesystem is
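The steps above can be sketched as follows, assuming Kubernetes >= 1.11 and a StorageClass with `allowVolumeExpansion: true`; the PVC and pod names are placeholders:

```shell
# 1. Grow the claim from 50Gi to 100Gi (equivalent to `kubectl edit pvc`).
kubectl patch pvc mysql-data \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
# 2. Restart the pod using the volume so the filesystem gets expanded
#    when the volume is re-attached.
kubectl delete pod mysql-0
```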

Access Kubernetes GKE cluster outside of GKE cluster with client-go?

狂风中的少年 submitted on 2019-11-29 22:14:46
Question: I have multiple Kubernetes clusters running on GKE (let's say clusterA and clusterB). I want to access both of those clusters from client-go in an app that is running in one of those clusters (e.g. access clusterB from an app that is running on clusterA). In general, for authenticating with Kubernetes clusters from client-go I see that I have two options: in-cluster config, or from a kubeconfig file. So it is easy to access clusterA from clusterA, but not clusterB from clusterA. What are my options
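The two options can be combined in one process: use the in-cluster config for the local cluster and ordinary kubeconfig loading for the remote one. A sketch under these assumptions: the app runs in clusterA, a kubeconfig for clusterB is mounted at `/etc/kubeconfigs/clusterB` (e.g. from a Secret), and a recent client-go is used (older versions take no `context` argument on `List`):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clusterA: in-cluster config from the pod's mounted service-account token.
	localCfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	localClient := kubernetes.NewForConfigOrDie(localCfg)

	// clusterB: plain kubeconfig loading, exactly as if running outside a cluster.
	remoteCfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubeconfigs/clusterB")
	if err != nil {
		panic(err)
	}
	remoteClient := kubernetes.NewForConfigOrDie(remoteCfg)

	// Use both clients side by side.
	for name, c := range map[string]*kubernetes.Clientset{
		"clusterA": localClient,
		"clusterB": remoteClient,
	} {
		pods, err := c.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d pods in default\n", name, len(pods.Items))
	}
}
```

Note that GKE kubeconfig entries normally rely on the `gcloud` auth plugin; for an in-pod kubeconfig it is simpler to embed a static token or service-account credentials for clusterB instead.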

How do I run private docker images on Google Container Engine

旧时模样 submitted on 2019-11-29 20:20:48
How do I run a docker image that I built locally on Google Container Engine? proppy: You can push your image to Google Container Registry and reference it from your pod manifest. Detailed instructions: assuming you have a DOCKER_HOST properly set up, a GKE cluster running the latest version of Kubernetes, and the Google Cloud SDK installed, set up some environment variables: `gcloud components update kubectl`, `gcloud config set project <your-project>`, `gcloud config set compute/zone <your-cluster-zone>`, `gcloud config set container/cluster <your-cluster-name>`, `gcloud container clusters get-credentials <your`
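The push itself can be sketched as follows, with placeholder project and image names in the same `<your-...>` style as the answer above:

```shell
# Tag the locally built image for the registry, then push it.
docker tag my-image gcr.io/<your-project>/my-image:v1
docker push gcr.io/<your-project>/my-image:v1
# The pod manifest then references gcr.io/<your-project>/my-image:v1
# as its container image.
```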

GKE: Pubsub messages between pods with push subscribers

偶尔善良 submitted on 2019-11-29 15:52:16
Question: I am using a GKE deployment with multiple pods and I need to send and receive messages between pods. I want to use Pub/Sub push subscribers. I found that for push I need to configure HTTPS access for the subscriber pods. In order to receive push messages, you need a publicly accessible HTTPS server to handle POST requests. The server must present a valid SSL certificate signed by a certificate authority and be routable by DNS. You also need to validate that you own the domain (or have equivalent access to
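On GKE, the valid-certificate requirement can be met with a Google-managed certificate attached to an Ingress. A sketch, with placeholder names and a hypothetical domain whose DNS must already point at the load balancer (API versions are those current for GKE clusters of this era):

```yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: push-endpoint-cert
spec:
  domains:
  - push.example.com          # hypothetical push-endpoint domain
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: push-endpoint
  annotations:
    networking.gke.io/managed-certificates: push-endpoint-cert
spec:
  backend:
    serviceName: subscriber-svc   # placeholder subscriber Service
    servicePort: 80
```

If standing up a public HTTPS endpoint is only needed for pod-to-pod messaging, a pull subscription avoids the requirement entirely, since subscriber pods simply poll Pub/Sub.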

Implementing workaround for missing http->https redirection in ingress-gce with GLBC

依然范特西╮ submitted on 2019-11-29 15:28:01
I am trying to wrap my brain around the suggested workarounds for the lack of built-in HTTP->HTTPS redirection in ingress-gce, using GLBC. What I am struggling with is how to use this custom backend that is suggested as one option to overcome this limitation (e.g. in How to force SSL for Kubernetes Ingress on GKE ). In my case the application behind the load-balancer does not itself have apache or nginx, and I just can't figure out how to include e.g. apache (which I know way better than nginx) in the setup. Am I supposed to set apache in front of the application as a proxy? In that case I
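Yes, the custom-backend workaround means putting a web server in front of the application as a reverse proxy and letting it issue the redirect. GLBC forwards both HTTP and HTTPS traffic to the same backend and sets the `X-Forwarded-Proto` header, so the proxy only redirects requests that arrived as plain HTTP. A sketch for Apache, assuming `mod_rewrite` and `mod_proxy` are enabled and the application listens on a hypothetical port 8080:

```apache
<VirtualHost *:80>
    RewriteEngine On
    # Redirect only requests that reached the load balancer over plain HTTP.
    RewriteCond %{HTTP:X-Forwarded-Proto} =http
    RewriteRule ^/(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
    # Everything else is proxied through to the application.
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

Apache can run either as a sidecar container in the same pod or baked into the application image; the Ingress backend then points at Apache's port instead of the app's.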

Google Cloud Build deploy to GKE Private Cluster

前提是你 submitted on 2019-11-29 13:43:05
Question: I'm running a Google Kubernetes Engine cluster with the "private cluster" option. I've also defined "authorized master networks" to be able to remotely access the environment - this works just fine. Now I want to set up some kind of CI/CD pipeline using Google Cloud Build - after successfully building a new docker image, this new image should be automatically deployed to GKE. When I first fired off the new pipeline, the deployment to GKE failed - the error message was something like: "Unable to connect
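For context, the deploy step itself typically uses the `gcr.io/cloud-builders/kubectl` builder; with a private cluster it fails unless the master's authorized networks admit Cloud Build's egress IPs (or the build runs inside a private worker pool with access to the VPC). A sketch of the `cloudbuild.yaml` step, with placeholder deployment, cluster, and zone names, assuming the Cloud Build service account holds the Kubernetes Engine Developer role:

```yaml
steps:
- name: gcr.io/cloud-builders/kubectl
  args:
  - set
  - image
  - deployment/myapp                              # placeholder deployment
  - myapp=gcr.io/$PROJECT_ID/myapp:$SHORT_SHA     # image built earlier in the pipeline
  env:
  - CLOUDSDK_COMPUTE_ZONE=europe-west1-b          # placeholder zone
  - CLOUDSDK_CONTAINER_CLUSTER=my-private-cluster # placeholder cluster name
```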