google-kubernetes-engine

Avoiding Error 429 (quota exceeded) while working with Google Container Registry

Submitted by 北城以北 on 2020-06-17 15:51:26
Question: I'm hitting a 429 error in Google Container Registry when too many images are pulled simultaneously: Error: Status 429 trying to pull repository [...] "Quota Exceeded." There is a Kubernetes cluster with multiple nodes whose pods implement Kubeflow steps. To avoid hitting the fixed quota limit, the Google guide suggests the following: - Increase the number of IP addresses talking to Container Registry (quotas are per IP address). - Add retries that introduce a delay. For example, you
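The second suggestion from the guide can be sketched as a retry loop with exponential backoff and jitter. This is a minimal illustration, not the guide's own code; `pull_image` is a hypothetical stand-in for whatever operation returns the 429:

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.01):
    """Retry `operation`, sleeping base_delay * 2**attempt (+ jitter) between tries."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError:  # e.g. a 429 "Quota Exceeded" from the registry
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Hypothetical flaky pull: fails twice with a 429, then succeeds.
calls = {"n": 0}
def pull_image():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("Status 429: Quota Exceeded")
    return "pulled"

print(retry_with_backoff(pull_image))  # → pulled
```

Spreading retries out this way (rather than retrying immediately) keeps many nodes from hammering the registry from the same IP at the same instant.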

How to run periodic volume snapshots using k8s client and cronjob

Submitted by 廉价感情. on 2020-06-17 09:49:27
Question: I have a PersistentVolumeClaim that I want to take snapshots of. I know there are the VolumeSnapshot docs. I think the best way to run periodic snapshots is to create a CronJob for that, so I've created a Docker image with the Python k8s client and my custom script. This way I can run it whenever I want, and I can access the kube config and all resources directly from the pod. FROM python:3.8-slim-buster RUN apt-get -qq update && apt-get -qq install -y git COPY . . RUN pip install --upgrade
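The script inside such a CronJob pod mainly needs to build a VolumeSnapshot manifest and submit it as a custom object. A hedged sketch with the Python client (the `v1beta1` API group matches 2020-era clusters, and the snapshot-class name is an assumption):

```python
from datetime import datetime, timezone

def make_snapshot_body(pvc_name, snapshot_class="csi-gce-pd-snapshot-class"):
    """Build a VolumeSnapshot manifest for the given PVC, with a timestamped name."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1beta1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": f"{pvc_name}-{stamp}"},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

body = make_snapshot_body("my-pvc")
print(body["metadata"]["name"])  # e.g. my-pvc-20200617-094900

# Inside the CronJob pod, the kubernetes client would submit it, roughly:
# from kubernetes import client, config
# config.load_incluster_config()
# client.CustomObjectsApi().create_namespaced_custom_object(
#     group="snapshot.storage.k8s.io", version="v1beta1",
#     namespace="default", plural="volumesnapshots", body=body)
```

The pod's ServiceAccount would also need RBAC permission to create `volumesnapshots` in the target namespace.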

Why GKE Ingress controller gives 404 error

Submitted by 爱⌒轻易说出口 on 2020-06-17 09:42:34
Question: We have the code below for an Ingress; the "/demo" app is running fine and responds to REST API GET calls. However, "/um" is not opening and gives a 404 error. UM is a front-end app built in Angular 6 and it should open an index page. When we expose this application with an external IP (i.e. type: LoadBalancer), the application works fine. The same app returns 404 when tried through the Ingress setup. Not sure what causes this issue. Below is our sample Ingress deployment file. Kindly throw some
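A common cause of this symptom: the GKE Ingress controller matches paths literally and does not rewrite them, so the backend must actually serve its content under /um (for an Angular app, built with --base-href /um/), and sub-paths need a wildcard. A hedged sketch under those assumptions (service names invented for illustration):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /demo/*        # wildcard so /demo/anything also matches
        backend:
          serviceName: demo-svc
          servicePort: 80
      - path: /um/*          # backend must serve under /um, not only at /
        backend:
          serviceName: um-svc
          servicePort: 80
```

This also explains why type: LoadBalancer works: there the app is reached at /, with no path prefix involved.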

ImagePullBackOff on GKE with Private Google Container Registry

Submitted by 半腔热情 on 2020-06-17 05:36:13
Question: I am creating a deployment in GKE with the following (standard) deployment: apiVersion: apps/v1 kind: Deployment metadata: name: api-deployment spec: replicas: 1 selector: matchLabels: component: api template: metadata: labels: component: api spec: containers: - name: api image: eu.gcr.io/xxxx-xxx/api:latest imagePullPolicy: Always resources: requests: memory: "320Mi" cpu: "100m" limits: memory: "450Mi" cpu: "150m" ports: - containerPort: 5010 However, for some reason GKE complains about a
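ImagePullBackOff with a private eu.gcr.io image usually comes down to access: either the nodes' service account lacks read access to the registry's storage bucket, or the deployment needs an explicit pull secret. A hedged sketch of the pull-secret route (the secret name is an assumption; the secret itself would first be created with kubectl create secret docker-registry using a service-account key):

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
      - name: gcr-pull-secret   # assumed name; must exist in the same namespace
      containers:
      - name: api
        image: eu.gcr.io/xxxx-xxx/api:latest
```

If the image lives in the same project as the cluster, checking the node pool's OAuth scopes (storage read access) is usually the first step before adding a secret.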

Google Cloud Endpoints (ESP) gRPC transcoding to camel case

Submitted by 十年热恋 on 2020-06-13 05:41:11
Question: I have deployed a gRPC server using Google Cloud Endpoints / ESP, following the instructions here: https://cloud.google.com/endpoints/docs/grpc/get-started-kubernetes-engine In my proto file, my fields are named in snake_case, following the Protocol Buffers naming conventions (https://developers.google.com/protocol-buffers/docs/style#message-and-field-names), as below: message MyMessage { string my_field = 1; } When deploying to Cloud Endpoints, the field names are converted to camelCase. So
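The camelCase conversion is the proto3 JSON mapping's default, not something ESP invents: JSON keys are derived by lowerCamelCasing the field name. A json_name option on the field overrides that default, so transcoded JSON keeps the original key. A sketch against the message from the question:

```protobuf
message MyMessage {
  // Without json_name, the proto3 JSON key would be "myField".
  string my_field = 1 [json_name = "my_field"];
}
```

This changes only the JSON representation; the wire format and generated gRPC code keep using my_field either way.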

Container logs not working after cluster update on GKE

Submitted by 杀马特。学长 韩版系。学妹 on 2020-06-01 01:40:53
Question: I recently upgraded my cluster, which runs multiple containers for microservices written in Java (using Spring Boot's default log4j2 configuration). Since then, the container log is not being updated anymore. The kubectl logs command works fine and all recent logs can be seen with it, but the logs that should appear in the GKE dashboard are simply not showing anymore. I checked Google's Logging API and it's enabled. Does anyone know what's the
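Since kubectl logs works, the containers are writing to stdout and the gap is between the node and Cloud Logging; a frequent cause after an upgrade is the cluster still pointing at the legacy logging integration. A hedged sketch of how to check and switch (cluster name and zone are placeholders):

```shell
# Which logging integration is the cluster using? Legacy clusters report
# logging.googleapis.com; Stackdriver Kubernetes Engine Monitoring reports
# logging.googleapis.com/kubernetes.
gcloud container clusters describe my-cluster --zone europe-west1-b \
  --format="value(loggingService)"

# Switch to the newer integration if the cluster is still on the legacy one.
gcloud container clusters update my-cluster --zone europe-west1-b \
  --logging-service=logging.googleapis.com/kubernetes
```

If the integration is already current, the next thing to check is whether the fluentd/logging agent pods in kube-system are healthy on the upgraded nodes.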

Kubernetes object size limitations

Submitted by 最后都变了- on 2020-05-29 05:20:08
Question: I am dealing with CRDs and creating custom resources. I need to keep lots of information about my application in the custom resource. As per the official docs, etcd works with requests up to 1.5MB. I am hitting errors like "error": "Request entity too large: limit is 3145728" I believe the specified limit in the error is 3MB. Any thoughts on this? Any way out of this problem? Answer 1: The "error": "Request entity too large: limit is 3145728" is probably the default response from
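The number in the error is a byte count: 3145728 B is exactly 3 MiB, which is the request-size limit the API server enforces, sitting above etcd's roughly 1.5 MiB default value size. A quick check of the arithmetic:

```python
limit_bytes = 3145728
assert limit_bytes == 3 * 1024 ** 2  # exactly 3 MiB
print(limit_bytes / 1024 ** 2)  # → 3.0
```

Objects anywhere near that size are a sign the data belongs in external storage (a database, object store, or ConfigMaps split into chunks) with the custom resource holding only references to it.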

stackdriver-metadata-agent-cluster-level gets OOMKilled

Submitted by 情到浓时终转凉″ on 2020-05-25 04:33:25
Question: I updated a GKE cluster from 1.13 to 1.15.9-gke.12. In the process I switched from legacy logging to Stackdriver Kubernetes Engine Monitoring. Now I have the problem that the stackdriver-metadata-agent-cluster-level pod keeps restarting because it gets OOMKilled. The memory seems to be just fine, though. The logs also look fine (same as the logs of a newly created cluster): I0305 08:32:33.436613 1 log_spam.go:42] Command line arguments: I0305 08:32:33.436726 1 log_spam.go:44] argv[0]: '
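A workaround commonly reported for this symptom is raising the agent container's memory limit, since the default shipped with some GKE versions is very low. A hedged sketch (the container name is an assumption, and the GKE addon manager may revert manual edits on reconciliation):

```shell
kubectl -n kube-system patch deployment stackdriver-metadata-agent-cluster-level \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"metadata-agent","resources":{"limits":{"memory":"256Mi"}}}]}}}}'
```

If the addon manager keeps reverting the change, upgrading to a GKE patch release that ships a higher default limit is the durable fix.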

Duplicate log entries with Google Cloud Stackdriver logging of Python code on Kubernetes Engine

Submitted by 好久不见. on 2020-05-25 04:29:20
Question: I have a simple Python app running in a container on Google Kubernetes Engine. I am trying to connect the standard Python logging to Google Stackdriver logging using this guide. I have almost succeeded, but I am getting duplicate log entries, with one always at the 'error' level... Screenshot of Stackdriver logs showing duplicate entries This is my Python code that sets up the logging according to the above guide: import webapp2 from paste import httpserver import rpc # Imports the Google
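Duplicates like this usually mean one log record is handled twice: once by the handler attached to the app's logger and once more after propagating up to a handler on the root logger (a container's stderr handler is typically ingested at 'error' level, which matches the symptom). The mechanism, and the `propagate = False` fix, can be demonstrated with the standard library alone, no Google client needed:

```python
import logging

root_records, app_records = [], []

class ListHandler(logging.Handler):
    """Collect formatted messages into a list so we can inspect them."""
    def __init__(self, sink):
        super().__init__()
        self.sink = sink
    def emit(self, record):
        self.sink.append(record.getMessage())

logging.getLogger().addHandler(ListHandler(root_records))  # root-level handler
app = logging.getLogger("app")
app.addHandler(ListHandler(app_records))                   # app-level handler

app.warning("hello")    # handled by BOTH handlers: one copy in each sink
app.propagate = False   # stop records from bubbling up to the root handler
app.warning("again")    # now handled only by the app-level handler

print(root_records)  # → ['hello']
print(app_records)   # → ['hello', 'again']
```

In the Stackdriver setup from the guide, the equivalent is making sure records reach only the Cloud Logging handler, either by setting `propagate = False` on the logger it is attached to or by removing the extra stderr handler.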