google-kubernetes-engine

Scheduling and scaling pods in kubernetes

冷暖自知 submitted on 2021-02-11 13:24:32
Question: I am running a k8s cluster on GKE with 4 node pools in different configurations. Node pool 1 (single node, cordoned): running Redis & RabbitMQ. Node pool 2 (single node, cordoned): running monitoring & Prometheus. Node pool 3 (large single node): application pods. Node pool 4 (single node with auto-scaling enabled): application pods. Currently I am running a single replica of each service on GKE, except for 3 replicas of the main service, which mostly manages everything. When scaling
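A common pattern for a setup like this is to pin the application Deployment to a node pool via the GKE-provided cloud.google.com/gke-nodepool node label and let a HorizontalPodAutoscaler drive the replica count. The sketch below is only illustrative; the deployment name, image, pool name (pool-4), and the 70% CPU target are assumptions, not taken from the question:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: main-service                              # hypothetical name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: main-service
      template:
        metadata:
          labels:
            app: main-service
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: pool-4     # GKE sets this label on every node
          containers:
          - name: main-service
            image: gcr.io/my-project/main-service:1.0 # illustrative image
            resources:
              requests:                               # requests are what the autoscalers reason about
                cpu: 250m
                memory: 256Mi
    ---
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: main-service
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: main-service
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70

With the node pool's autoscaler enabled, new nodes are added only when replicas created by the HPA cannot be scheduled on the existing node.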

GKE Cluster autoscaler profile for older cluster

社会主义新天地 submitted on 2021-02-11 12:41:55
Question: In GKE there is now a new Automation tab when creating a K8s cluster - set cluster-level criteria for automatic maintenance, autoscaling, and auto-provisioning, and edit the node pool for automation like auto-scaling, auto-upgrades, and repair. It has two options - Balanced (default) & Optimize utilization (beta). Can't we set this for an older cluster? Is there any workaround? We are running old GKE version 1.14 and we want to auto-scale the cluster when existing nodes reach 70% resource utilization. Currently, we
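For what it's worth, the autoscaling profile can usually be applied to an existing cluster from the command line rather than the console; at the time this was a beta gcloud flag. A hedged sketch, with cluster name and zone as placeholders:

    # Requires a reasonably recent Cloud SDK; the flag lived under "gcloud beta" at the time.
    gcloud beta container clusters update CLUSTER_NAME \
        --zone ZONE \
        --autoscaling-profile optimize-utilization

Note that the cluster autoscaler reacts to unschedulable pods rather than to a raw node-utilization threshold, so a "70% of node resources" goal is normally expressed through pod resource requests plus a HorizontalPodAutoscaler.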

How to programmatically modify a running k8s pod's status conditions?

生来就可爱ヽ(ⅴ<●) submitted on 2021-02-11 12:01:40
Question: I'm trying to modify the running state of my pod, which is managed by a deployment controller, both from the command line via kubectl patch and from the k8s Python client API. Neither of them seems to work. From the command line I tried both a strategic merge patch and a JSON merge patch, but neither works. For example, I'm trying to patch the pod conditions to set the status field to False: kubectl -n foo-ns patch pod foo-pod-18112 -p '{ "status": { "conditions": [ { "type": "PodScheduled", "status":
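One likely reason a plain kubectl patch appears to do nothing is that status lives in a separate subresource; newer kubectl releases (roughly 1.24 and later) can target it directly. This is only a sketch using the question's names, and even when the patch is accepted the kubelet owns pod conditions and will typically reconcile them back:

    # Requires kubectl >= 1.24 and RBAC permission on pods/status.
    kubectl -n foo-ns patch pod foo-pod-18112 \
        --subresource=status --type=strategic \
        -p '{"status":{"conditions":[{"type":"PodScheduled","status":"False"}]}}'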

Expose service on custom port via `https` on GKE

﹥>﹥吖頭↗ submitted on 2021-02-11 06:56:38
Question: I am new to Kubernetes (GKE to be specific), this is my third week, so bear with me. I've been tasked with exposing a StatefulSet via https like this: - https://example.com/whateva -> service:8080 (+Google Cloud CDN) - https://example.com:5001 -> service:9095 I have been trying for a week now. I was under the impression that this requirement was pretty straightforward? Can anyone point me in the right direction? Questions: I would like to use the ManagedCertificate from Google Cloud but it
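For the 443 half of this, the usual GKE pattern is a ManagedCertificate resource referenced from an Ingress annotation; the GKE external HTTP(S) load balancer only listens on 80/443, so the :5001 mapping would typically need a separate LoadBalancer Service, and Cloud CDN is enabled per backend with a BackendConfig (omitted here). A hedged sketch with placeholder resource names; the backend service name and port come from the question:

    apiVersion: networking.gke.io/v1
    kind: ManagedCertificate
    metadata:
      name: example-cert                 # placeholder name
    spec:
      domains:
      - example.com
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress              # placeholder name
      annotations:
        networking.gke.io/managed-certificates: example-cert
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - path: /whateva
            pathType: Prefix
            backend:
              service:
                name: service            # the StatefulSet's Service from the question
                port:
                  number: 8080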

Expose a private Kubernetes cluster with a NodePort type service

心不动则不痛 submitted on 2021-02-10 14:19:58
Question: I have created a VPC-native cluster on GKE with master authorized networks disabled on it. I think I did everything correctly, but I still can't access the app externally. Below is my service manifest.

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.16.0 (0c01309)
      creationTimestamp: null
      labels:
        io.kompose.service: app
      name: app
    spec:
      ports:
      - name: '3000'
        port: 80
        targetPort: 3000
        protocol: TCP
        nodePort: 30382
      selector:
        io.kompose.service:
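Two things usually decide whether a NodePort is reachable from outside: a VPC firewall rule that opens the node port, and nodes that actually have routable addresses; on a private cluster the nodes have no external IPs, so external traffic normally needs a LoadBalancer Service or an Ingress instead. A hedged sketch of the firewall rule for the nodePort above (the rule and network names are assumptions):

    # Open the NodePort on the cluster's VPC so traffic can reach the nodes.
    gcloud compute firewall-rules create allow-nodeport-30382 \
        --network default \
        --allow tcp:30382 \
        --source-ranges 0.0.0.0/0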

GKE VPC Native Cluster and Connectivity to Cloud SQL

和自甴很熟 submitted on 2021-02-10 12:57:51
Question: What is "VPC Native" in a GKE cluster? Does having "VPC Native" disabled on a GKE cluster restrict connecting to Cloud SQL via private IP? We have a GKE cluster with "VPC Native" disabled, and we have whitelisted the GKE cluster in Cloud SQL, but even after that connectivity fails. Also, what is the recommended way to connect to Cloud SQL from a private GKE cluster? Suppose we have an application which we are migrating from AWS to GKE; we don't want to build the Cloud SQL proxy. Answer 1: VPC Native in GKE changes the way
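As background, "VPC-native" means the cluster allocates pod and service IPs from alias IP ranges in the VPC, which is what lets pod traffic reach a Cloud SQL private IP over the VPC peering. A hedged sketch for checking and creating it; cluster and zone names are placeholders, and a routes-based cluster generally cannot be converted in place:

    # Check whether an existing cluster is VPC-native (alias IPs enabled).
    gcloud container clusters describe CLUSTER_NAME --zone ZONE \
        --format="value(ipAllocationPolicy.useIpAliases)"

    # Create a new VPC-native cluster (alias IP ranges enabled).
    gcloud container clusters create new-cluster --zone ZONE --enable-ip-alias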

pubsub.NewClient method stuck on GKE golang

时间秒杀一切 submitted on 2021-02-10 07:37:08
Question: I am developing a golang app that uses the Google Pub/Sub client library. I am using Google Container Engine for deployment. I followed these steps for deployment: build the golang binary using CGO_ENABLED=0 GOOS=linux go build -o bin/app app.go, build a docker image using the Dockerfile shown below, and create a Kubernetes deployment.

    FROM scratch
    ADD bin/app /
    CMD ["/app"]

The app starts fine and I can see some initial debug logs. However, when I try to instantiate a pub/sub client using
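A frequent cause of this symptom with FROM scratch images is that the image carries no CA root certificates, so the client's TLS/gRPC connection to the Pub/Sub endpoint cannot verify certificates and appears to hang. A hedged sketch of a multi-stage Dockerfile that copies the CA bundle in; the builder image tag and paths are illustrative:

    # Builder stage: compile the static binary and provide a CA bundle to copy.
    FROM golang:1.15 AS builder
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/app app.go

    # Final stage: scratch plus the trusted CA roots needed for outbound TLS.
    FROM scratch
    COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
    COPY --from=builder /bin/app /app
    CMD ["/app"]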

Unable to connect to CloudSQL from Kubernetes Engine (Can't connect to MySQL server on 'localhost')

南笙酒味 submitted on 2021-02-10 06:30:52
Question: I tried to follow the steps in https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine. I have the application container and the Cloud SQL Proxy container running in the same pod. After creating the cluster, the logs for the proxy container look correct:

    $ kubectl logs users-app-HASH1-HASH2 cloudsql-proxy
    2018/08/03 18:58:45 using credential file for authentication; email=it-test@tutorial-bookshelf-xxxxxx.iam.gserviceaccount.com
    2018/08/03 18:58:45 Listening on 127.0.0.1:3306 for
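The "localhost" in the error message is worth a second look: many MySQL drivers treat the host name localhost as a request for a Unix socket, which the sidecar proxy is not providing, so the app needs to dial 127.0.0.1:3306 over TCP instead. A hedged sketch of the sidecar layout with the app pointed at the proxy; the pod name, images, env variable names, and instance string are placeholders, not taken from the question:

    apiVersion: v1
    kind: Pod
    metadata:
      name: users-app
    spec:
      containers:
      - name: users-app
        image: gcr.io/my-project/users-app:1.0        # illustrative app image
        env:
        - name: DB_HOST
          value: "127.0.0.1"                          # TCP to the sidecar; "localhost" may select a Unix socket
        - name: DB_PORT
          value: "3306"
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances=PROJECT:REGION:INSTANCE=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials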

CrashLoopBackOff (Mongo in Docker/Kubernetes) - Failed to start up WiredTiger under any compatibility version

落花浮王杯 submitted on 2021-02-08 10:27:19
Question: I'm suddenly facing some issues in my Kubernetes application (with no event to explain it). The application has been working properly for a year, but now I'm getting a CrashLoopBackOff status. IMPORTANT UPDATE: I cannot update the Mongo replication controller in GKE, because when I commit the changes in mongo.yml (from Git) all workloads update except mongo-controller (which is down). In GKE, under Workloads / mongo-controller / Managed pods, I can see that the "Created on" date is some days ago
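The WiredTiger "compatibility version" wording in the title often points at an unpinned image: if the controller pulls mongo (i.e. latest), a pod restart after a new major MongoDB release can leave the server unable to open data files written by the previous version. One hedged recovery sketch is to pin the image back to the last major version that ran cleanly; all names and the version number below are illustrative assumptions, not the poster's manifest:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: mongo-controller
    spec:
      replicas: 1
      selector:
        app: mongo
      template:
        metadata:
          labels:
            app: mongo
        spec:
          containers:
          - name: mongo
            image: mongo:4.2            # pin to the previously working major version, not "latest"
            ports:
            - containerPort: 27017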