google-kubernetes-engine

GCP Kubernetes created 6 nodes when num-nodes was set to 2

Submitted by 左心房为你撑大大i on 2020-06-28 05:32:07
Question: I am following this tutorial to configure Kubernetes on GCP: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#clean-up. Following the suggestion from another question (GKE: Insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES"), I run this command to create a cluster:

gcloud container clusters create name-cluster --num-nodes=2

When I list the nodes using gcloud compute instances list, I notice that I have more than 2 nodes! Why?

NAME LOCATION MASTER_VERSION
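One likely cause: --num-nodes is the node count per zone, not per cluster. If the cluster was created as a regional cluster (or with --node-locations listing three zones), GKE replicates the node pool in every zone, so 2 nodes per zone across 3 zones = 6 VMs. A minimal sketch of getting exactly 2 nodes by pinning the cluster to a single zone (the zone name is illustrative):

    # Zonal cluster: --num-nodes=2 yields exactly 2 VMs
    gcloud container clusters create name-cluster \
        --zone us-central1-a \
        --num-nodes 2

    # Verify the per-zone size of the pool
    gcloud container node-pools describe default-pool \
        --cluster name-cluster --zone us-central1-a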

How to update api versions list in Kubernetes

Submitted by 心已入冬 on 2020-06-27 19:29:21
Question: I am trying to use the "autoscaling/v2beta2" apiVersion in my configuration, following this tutorial, on Google Kubernetes Engine. However, I get this error:

error: unable to recognize "backend-hpa.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2"

When I list the available API versions:

$ kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps
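The list printed by kubectl api-versions reflects what the cluster's control plane serves; it cannot be edited by hand. autoscaling/v2beta2 first shipped in Kubernetes 1.12, so a GKE master older than that will not offer it, and the fix is to upgrade the master (or fall back to autoscaling/v2beta1 in the manifest until then). A sketch, with an illustrative cluster name, zone, and target version:

    # Compare the reported Server Version against 1.12
    kubectl version --short

    # Upgrade the control plane so the newer API group becomes available
    gcloud container clusters upgrade my-cluster \
        --master --cluster-version 1.16 \
        --zone us-central1-a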

How do I move pods to a new node pool/ instance group

Submitted by 这一生的挚爱 on 2020-06-27 15:53:41
Question: I have a GKE cluster with one node pool attached. I want to make some changes to the node pool, such as adding tags, so I created a new node pool with my new config and attached it to the cluster; the cluster now has two node pools. At this point I want to move the pods to the new node pool and destroy the old one. How is this process done? Am I doing this right?

回答1: There are multiple ways to move your pods to the new node pool. One way is to steer your pods to the new node pool using a nodeSelector; another is to cordon and drain the old pool, as sketched below.
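A cordon-and-drain migration as a sketch (the pool, cluster, and zone names are illustrative; cloud.google.com/gke-nodepool is the label GKE puts on every node):

    # 1. Cordon every node in the old pool so no new pods are scheduled there
    for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool \
        -o jsonpath='{.items[*].metadata.name}'); do
      kubectl cordon "$node"
    done

    # 2. Drain each node; controllers recreate the evicted pods on the new pool
    for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool \
        -o jsonpath='{.items[*].metadata.name}'); do
      kubectl drain "$node" --ignore-daemonsets --delete-local-data
    done

    # 3. Delete the old pool once it is empty
    gcloud container node-pools delete old-pool \
        --cluster my-cluster --zone us-central1-a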

Volume claim on GKE / Multi-Attach error for volume Volume is already exclusively attached

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-06-24 22:24:20
Question: This problem was reported a long time ago, but as the existing answer and comments do not provide real solutions, I would like to get some help from experienced users. The error is the following (when describing the pod, which stays in the ContainerCreating state):

Multi-Attach error for volume "pvc-xxx" Volume is already exclusively attached to one node and can't be attached to another

This all runs on GKE. I had a previous cluster, and the problem never occurred. I have reused the same
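Background for the error: a GCE persistent disk bound through a ReadWriteOnce PersistentVolumeClaim can be attached to only one node at a time, so the message appears when a pod lands on a new node before the old node has released the disk. A sketch for inspecting the stuck attachment (the disk name and zone are illustrative; the disk name can be read from the PersistentVolume's pdName field):

    # See which instance still holds the disk
    gcloud compute disks describe gke-cluster-pvc-xxx \
        --zone us-central1-a --format='value(users)'

For a single-replica Deployment mounting such a volume, setting strategy: Recreate instead of the default RollingUpdate keeps the new pod from starting before the old one has detached the disk.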

Unhealthy nodes for load balancer when using nginx ingress controller on GKE

Submitted by 江枫思渺然 on 2020-06-24 11:44:09
Question: I have set up the nginx ingress controller following this guide. The ingress works well and I am able to visit the defaultbackend service and my own service as well. But when reviewing the objects created in the Google Cloud Console, in particular the load balancer object that was created automatically, I noticed that the health checks for the other nodes are failing. Is this because the ingress controller process is only running on the one node, and so it's the only one that passes the
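That is expected when the controller's Service is type LoadBalancer with externalTrafficPolicy: Local: the load balancer's health check only passes on nodes that actually host an ingress-controller pod, so the remaining nodes show as unhealthy by design. A sketch for confirming the setting (the service name and namespace are illustrative):

    # Print the traffic policy on the controller's service
    kubectl get svc ingress-nginx -n ingress-nginx \
        -o jsonpath='{.spec.externalTrafficPolicy}'

Running the controller as a DaemonSet makes every node pass the check, while switching to externalTrafficPolicy: Cluster also marks all nodes healthy at the cost of losing the client's source IP.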

Google Cloud build conditional step

Submitted by 落花浮王杯 on 2020-06-23 03:14:52
Question: This is my cloud build file:

substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-a
  _CLOUDSDK_CONTAINER_CLUSTER: $_CLOUDSDK_CONTAINER_CLUSTER
steps:
- name: gcr.io/$PROJECT_ID/sonar-scanner:latest
  args:
  - '-Dsonar.host.url=https://sonar.test.io'
  - '-Dsonar.login=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
  - '-Dsonar.projectKey=test-service'
  - '-Dsonar.sources=.'
- id: 'build test-service image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT
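Cloud Build has no native if/else between steps; a common workaround is to give a step a bash entrypoint and branch inside the script. A sketch, assuming the goal is to run a step only on master ($BRANCH_NAME is a built-in substitution; the kubectl target is illustrative):

- id: 'deploy on master only'
  name: 'gcr.io/cloud-builders/kubectl'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [ "$BRANCH_NAME" = "master" ]; then
      kubectl apply -f k8s/
    else
      echo "Skipping deploy for branch $BRANCH_NAME"
    fi
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=$_CLOUDSDK_COMPUTE_ZONE'
  - 'CLOUDSDK_CONTAINER_CLUSTER=$_CLOUDSDK_CONTAINER_CLUSTER'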

How to deploy Open Policy Agent in a Google Kubernetes cluster

Submitted by 淺唱寂寞╮ on 2020-06-22 04:24:49
Question: I'm new to k8s, and I want to deploy OPA in the same pod as my application in Google Kubernetes Engine, but I don't know how to do this. Are there any references where I can find more details? Could you please help me figure out the steps I should follow?

回答1: It should be similar to deploying to any Kubernetes cluster, as documented here. The difference could be that you may want to use a LoadBalancer-type service instead of NodePort.

Source: https://stackoverflow.com/questions/62258321
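Running OPA "in the same pod" means adding it as a sidecar container next to the application container, so the app can ask for policy decisions over localhost. A minimal sketch (the application image is hypothetical; the OPA flags come from the official openpolicyagent/opa image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/myapp:latest   # hypothetical application image
      - name: opa
        image: openpolicyagent/opa:latest
        args: ["run", "--server", "--addr=localhost:8181"]
        # the app queries http://localhost:8181/v1/data/... for decisions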

Avoiding Error 429 (quota exceeded) while working with Google Cloud Registry

Submitted by 独自空忆成欢 on 2020-06-17 15:52:28
Question: I'm hitting a 429 error in Google Container Registry when too many images are pulled simultaneously:

Error: Status 429 trying to pull repository [...] "Quota Exceeded."

There is a Kubernetes cluster with multiple nodes, and the pods implement Kubeflow steps. In the Google guide they suggest the following:

To avoid hitting the fixed quota limit, you can:
- Increase the number of IP addresses talking to Container Registry. Quotas are per IP address.
- Add retries that introduce a delay. For example, you
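A retry with exponential backoff around the pull, along the lines of the second suggestion; a sketch (the image name is illustrative):

    # Retry the pull up to 5 times, doubling the delay after each failure
    for attempt in 1 2 3 4 5; do
      docker pull gcr.io/my-project/my-image:latest && break
      sleep $((2 ** attempt))   # waits 2s, 4s, 8s, 16s, 32s
    done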