google-kubernetes-engine

Breakdown of GKE bill based on pods or deployments

柔情痞子 submitted on 2019-12-02 09:42:22
I need a breakdown of my usage inside a single project, categorized by Pods, Services, or Deployments, but the billing section in the console doesn't seem to provide such granular information. Is it possible to get this data somehow? I want to know the network + compute cost per Deployment or Pod, or at least at the cluster level. Is this breakdown available in BigQuery? A feature was recently released in GKE that collects resource metrics inside a cluster, and these can be combined with the exported billing data to separate costs
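For context, the feature the excerpt refers to is GKE usage metering: it writes namespace- and pod-level resource usage into a BigQuery dataset that can then be joined against the standard billing export. A minimal sketch of enabling it, assuming a cluster named `my-cluster` in `us-central1-a` and a pre-created dataset `gke_usage` (all placeholder names):

```shell
# Enable GKE usage metering on an existing cluster.
# The BigQuery dataset must already exist; names below are placeholders.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --resource-usage-bigquery-dataset gke_usage
```

Once enabled, per-namespace usage rows accumulate in the dataset and can be joined with the billing export tables in BigQuery to attribute cost below the project level.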

Workflow for building, pushing, and testing Docker images inside GKE / Kubernetes

删除回忆录丶 submitted on 2019-12-02 08:19:43
I am developing a Kubernetes service for deployment in Google Container Engine (GKE). Until recently, I have built Docker images in Google Cloud Shell, but I am hitting quota limits now, because the overall load on the free VM instance where Cloud Shell runs is apparently too high from multiple docker build and docker push invocations. My experience so far is that after about a week of sustained work I face the following error message and have to wait for about two days before Cloud Shell becomes available again: "Service usage limits temporarily exceeded. Try connecting later." I have tried to shift
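One way to move builds off the Cloud Shell VM entirely (not necessarily the asker's eventual solution) is Cloud Build, which builds and pushes the image in a managed environment. A hedged sketch, assuming a Dockerfile in the current directory and a project `my-project` (placeholder):

```shell
# Build remotely via Cloud Build and push the result to Container
# Registry; no local or Cloud Shell Docker daemon is involved.
gcloud builds submit --tag gcr.io/my-project/my-image:v1 .
```

The source directory is uploaded, built on Google-managed workers, and the tagged image lands in the registry ready for `kubectl set image` or a Deployment update.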

Google Container Engine: Accessing Cloud Storage

▼魔方 西西 submitted on 2019-12-02 07:22:26
Question: I can't get Application Default Credentials working in Google Container Engine. The docs say they're intended for App Engine and Compute Engine, but I've been told that they should transparently pass through to a container running on Container Engine. Here's the code that's failing:

credentials = GoogleCredentials.get_application_default()
service = discovery.build('storage', 'v1', credentials=credentials)

The error it's failing with: AssertionError: No api proxy found for service
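That AssertionError typically comes from App Engine SDK stubs rather than from GKE itself; on Container Engine, Application Default Credentials are served by the metadata server, and what the resulting token can reach also depends on the node pool's OAuth scopes. A hedged sketch of creating a cluster whose nodes can reach Cloud Storage (whether scopes are this asker's root cause is not confirmed by the excerpt; names are placeholders):

```shell
# Nodes need a storage scope for ADC-based Cloud Storage access.
# "my-cluster" is a placeholder name.
gcloud container clusters create my-cluster \
  --scopes storage-rw,logging-write,monitoring
```

Scopes are fixed at node-pool creation time, so an existing pool without a storage scope would need a new pool rather than an in-place update.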

Is the Google Container Engine Kubernetes Service LoadBalancer sending traffic to unresponsive hosts?

本小妞迷上赌 submitted on 2019-12-02 06:50:48
Question: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? "This target pool has no health check, so traffic will be sent to all instances regardless of their status." I have a service (an NGINX reverse proxy) that targets specific pods and exposes TCP 80 and 443. In my example only one NGINX pod is running within the instance pool. The Service type is "LoadBalancer". Using Google Container Engine, this creates a new LoadBalancer (LB) that specifies target pools of specific VM instances. Then a
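The quoted warning describes the GCE target pool, but on the cluster side kube-proxy on every node still forwards incoming traffic only to pods that are ready, so readiness probes are what actually gate whether a pod receives requests. A hedged sketch of a readiness probe for the NGINX container (names and the health path are illustrative assumptions):

```yaml
# Illustrative readinessProbe fragment for a pod spec; a pod failing
# this check is removed from the Service's endpoints, so no node
# forwards traffic to it even though the LB targets all nodes.
containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```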

RBAC Error in Kubernetes

喜夏-厌秋 submitted on 2019-12-02 04:27:17
I have deployed Kubernetes v1.8 in my workplace. Three months ago I created roles granting admin and view access to namespaces. In the initial phase, RBAC worked according to the access given to each user. Now RBAC is no longer being enforced: everyone who has access to the cluster has cluster-admin access. Can you suggest what errors or changes to look for?

Ensure the RBAC authorization mode is still being used (--authorization-mode=…,RBAC is part of the apiserver arguments). If it is, then check for a clusterrolebinding that is granting the cluster-admin role to all authenticated users: kubectl get
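The diagnostic the answer starts to describe can be sketched as follows (the binding name to inspect is whatever the listing reveals; `cluster-admin` below is just the common suspect):

```shell
# List all cluster-wide role bindings.
kubectl get clusterrolebindings

# Inspect a suspicious binding to see which subjects it grants to.
kubectl describe clusterrolebinding cluster-admin

# A binding whose subject is the group "system:authenticated"
# effectively gives every logged-in user the bound role.
```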

Ingress and Ingress controller how to use them with NodePort Services?

。_饼干妹妹 submitted on 2019-12-02 02:46:58
Question: I have a single service running on a NodePort service. How do I use an Ingress to access multiple services?

deployment.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
        tier: backend
        track: dev
    spec:
      containers:
        - name: auth
          image: [url]/auth_app:v2
          ports:
            - name: auth
              containerPort: 3000
```

service.yml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth
    tier: backend
  ports:
```
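A minimal Ingress routing to the auth Service above might look like the following sketch (the host is a placeholder, the Service's truncated ports section is assumed to expose port 3000, and the API version matches the Kubernetes era of this question):

```yaml
# Illustrative Ingress; each additional service gets its own
# host or path rule under spec.rules.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-ingress
spec:
  rules:
    - host: auth.example.com     # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: auth
              servicePort: 3000
```

With a GKE Ingress controller, the backing Service would typically be `type: NodePort`, and more services are exposed by appending further `rules` entries rather than creating more load balancers.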

Dynamically adding/removing named hosts from k8s ingress

女生的网名这么多〃 submitted on 2019-12-02 01:36:12
I'm setting up a k8s cluster on GKE. A wildcard DNS *.server.com will point to an Ingress controller. Internally to the cluster, there will be webserver pods, each exposing a unique service. The Ingress controller will use the server name to route to the various services. Servers will be created and destroyed on a nearly daily basis. I'd like to know if there's a way to add and remove a named server from the Ingress controller without editing the whole list of named servers.

It appears you're planning to host multiple domain names on a single Load Balancer (== single Ingress resource). If
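One way to add a host rule without re-editing the full manifest (the excerpt cuts off before the answerer's actual suggestion, so this is only an illustrative technique) is a JSON patch that appends to the Ingress rules array:

```shell
# Append a new host rule to an existing Ingress named "main-ingress";
# all names here are placeholders.
kubectl patch ingress main-ingress --type=json -p='[
  {"op": "add", "path": "/spec/rules/-",
   "value": {"host": "new.server.com",
             "http": {"paths": [{"path": "/",
                                 "backend": {"serviceName": "new-svc",
                                             "servicePort": 80}}]}}}
]'
```

Removal works the same way with an `{"op": "remove", "path": "/spec/rules/N"}` entry, though that requires knowing the rule's index.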

GKE - ErrImagePull pulling from Google Container Registry

倖福魔咒の submitted on 2019-12-02 00:57:18
Question: I have a Google Kubernetes Engine cluster which until recently was happily pulling private container images from a Google Container Registry bucket. I haven't changed anything, but now when I update my Kubernetes Deployments, it's unable to launch new pods, and I get the following events:

Normal   Pulling  14s  kubelet, <node-id>  pulling image "gcr.io/cloudsql-docker/gce-proxy:latest"
Normal   Pulling  14s  kubelet, <node-id>  pulling image "gcr.io/<project-id>/backend:62d634e"
Warning  Failed   14s
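GCR images are stored in a Cloud Storage bucket, so one hedged first check (not confirmed as this asker's root cause) is whether the cluster's node service account still has read access to that bucket; the project ID below is a placeholder:

```shell
# Inspect IAM on the bucket backing gcr.io for the project.
gsutil iam get gs://artifacts.my-project.appspot.com

# The node pool's service account (often
# PROJECT_NUMBER-compute@developer.gserviceaccount.com) needs at
# least roles/storage.objectViewer on this bucket to pull images.
```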