google-kubernetes-engine

How to authenticate to the Google Container Engine Kubernetes API server using either the k8s REST or fabric8 API?

落花浮王杯 submitted on 2020-01-07 03:41:15
Question: I'm trying to write a service that spawns container-based applications within a Kubernetes cluster. I'd like to do some testing against Google Container Engine as a baseline; however, I am having trouble figuring out how to get my REST client to authenticate with the k8s API server (the master). I came across a good hint here: http://www.scriptscoop.net/t/9c6a16719a43/google-container-engine-rest-api-authorization.html The author says "I know that OAuth works [against Google Container Engine] …
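Following that hint, one way to authenticate a REST client is to pass an OAuth access token as a bearer token. Below is a minimal Go sketch (not from the original answer) using k8s.io/client-go, assuming you have already obtained the cluster endpoint, its CA certificate, and an OAuth2 access token out of band (for example via gcloud auth print-access-token); the host, token, and certificate values are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Placeholder values: the GKE master endpoint, its CA certificate (PEM),
	// and an OAuth2 access token obtained out of band.
	caPEM := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----")
	cfg := &rest.Config{
		Host:        "https://<cluster-endpoint>",
		BearerToken: "<oauth2-access-token>",
		TLSClientConfig: rest.TLSClientConfig{
			CAData: caPEM,
		},
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building client: %v", err)
	}

	// A simple authenticated call: list pods in the default namespace.
	pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("listing pods: %v", err)
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}
```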

Setting up ERROR REPORTING for GKE

穿精又带淫゛_ submitted on 2020-01-06 21:16:47
Question: I am trying to set up Stackdriver Error Reporting for an app deployed to GKE. As I understand it, there are two ways of doing that: the Stackdriver Logging agent and the Error Reporting REST API. According to the "Setting up on Google Compute Engine" docs, if I already have a running logging agent I can reach it on localhost:24224. It looks like there already is a logging agent for GKE:

    ✗ kubectl get pods --namespace=kube-system
    NAME                                                 READY  STATUS  RESTARTS  AGE
    fluentd-cloud-logging-gke-tc-default-pool-5713124a …
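For the REST API route, one option (a sketch, not necessarily what the GKE docs prescribe) is the cloud.google.com/go/errorreporting client, which wraps the Error Reporting API; the project ID and service name below are placeholders.

```go
package main

import (
	"context"
	"errors"
	"log"

	"cloud.google.com/go/errorreporting"
)

func main() {
	ctx := context.Background()

	// Placeholder project ID and service name.
	client, err := errorreporting.NewClient(ctx, "my-gcp-project", errorreporting.Config{
		ServiceName: "my-gke-app",
		OnError: func(err error) {
			log.Printf("could not report error: %v", err)
		},
	})
	if err != nil {
		log.Fatalf("creating error reporting client: %v", err)
	}
	defer client.Close()

	// Report an example error; entries are batched and flushed on Close.
	client.Report(errorreporting.Entry{
		Error: errors.New("example: something went wrong"),
	})
}
```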

RouteController failed to create a route on GKE

微笑、不失礼 submitted on 2020-01-06 08:00:45
Question: I have a cluster on GKE whose node pool I create when I want to use the cluster and delete when I'm done with it. It's a two-node cluster with the master in europe-west2-a and whose node zones are europe-west2-a and europe-west2-b. The most recent creation resulted in the node in zone B failing with NetworkUnavailable because RouteController failed to create a route. The reason given was: Could not create route xxx 10.244.1.0/24 for node xxx after 342.263706ms: instance not found …

Running geth on kubernetes

梦想与她 submitted on 2020-01-06 06:55:37
Question: I am running a geth full node (https://github.com/ethereum/go-ethereum/wiki/geth) on Google Cloud Platform on a VM instance. Currently, I have mounted an SSD and write the chain data to it. I now want to run it on multiple VM instances and use a load balancer to serve the requests made by the Dapp. I can do this using a normal load balancer, creating VMs, and autoscaling. However, I have the following questions: the SSD seems to be a very important factor in blockchain syncing speed. If I simply create VM …

Is there a golang sdk equivalent of “gcloud container clusters get-credentials”

余生长醉 submitted on 2020-01-06 06:43:30
Question: Is there a golang SDK equivalent of gcloud container clusters get-credentials? I have created a GKE cluster using the golang SDK google.golang.org/api/container/v1. Now I want to obtain the kubeconfig for the created cluster. Is there a way in golang to achieve that? I have explored func (r *ProjectsZonesClustersService) Get(projectId string, zone string, clusterId string) *ProjectsZonesClustersGetCall, but this returns the complete cluster configuration, not the kubeconfig. I expect to get …
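There is no single SDK call that writes a kubeconfig, but a common pattern is to take the Endpoint and MasterAuth.ClusterCaCertificate from the Get call you already have and assemble a client-go rest.Config yourself. A sketch, assuming Application Default Credentials are available; the project, zone, and cluster names are placeholders.

```go
package main

import (
	"context"
	"encoding/base64"
	"log"

	"golang.org/x/oauth2/google"
	container "google.golang.org/api/container/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	ctx := context.Background()

	svc, err := container.NewService(ctx)
	if err != nil {
		log.Fatalf("container service: %v", err)
	}

	// Placeholder project/zone/cluster identifiers.
	cluster, err := svc.Projects.Zones.Clusters.Get("my-project", "us-central1-a", "my-cluster").Do()
	if err != nil {
		log.Fatalf("get cluster: %v", err)
	}

	// The CA certificate is returned base64-encoded.
	caCert, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate)
	if err != nil {
		log.Fatalf("decode CA cert: %v", err)
	}

	// Use Application Default Credentials for the bearer token, which is
	// roughly what `gcloud container clusters get-credentials` arranges.
	ts, err := google.DefaultTokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatalf("token source: %v", err)
	}
	tok, err := ts.Token()
	if err != nil {
		log.Fatalf("token: %v", err)
	}

	cfg := &rest.Config{
		Host:            "https://" + cluster.Endpoint,
		BearerToken:     tok.AccessToken,
		TLSClientConfig: rest.TLSClientConfig{CAData: caCert},
	}

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatalf("kubernetes client: %v", err)
	}
	log.Println("client configured for", cluster.Name)
}
```

Note that the static access token expires; a longer-lived client would refresh it (for example via the config's WrapTransport). The static token above is only for brevity.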

Kubernetes - ingress service shows UNHEALTHY state

陌路散爱 submitted on 2020-01-06 05:11:12
Question: I am using this tutorial: ingress on GCE. The tutorial works fine with the docker image used in it, but with my own docker image I always get an UNHEALTHY state for the backend service. I added liveness and readiness TCP probes because my application does not respond to '/' with 200. The deployment YAML looks like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        run: neg-demo-app # Label for the Deployment
      name: neg-demo-app # Name of Deployment
    spec: # Deployment's specification
      selector:
        matchLabels:
          run: …
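One detail worth noting (not part of the tutorial snippet above): the GCE ingress health check probes '/' by default and expects a 200, so TCP probes alone may not make the backend healthy. A hedged Go sketch of a dedicated health endpoint that an httpGet readiness probe, and hence the derived load balancer health check, could target; the /healthz path and port are example choices.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Example health endpoint; point an httpGet readiness probe (and the
	// load balancer health check) at this path instead of "/".
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	// The application's real handlers would be registered here as well.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```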

Reserve a range of static IPs for Kubernetes pods

爷,独闯天下 submitted on 2020-01-06 04:34:08
Question: I am attempting to build a Kubernetes cluster on Google Container Engine whose pods make requests to the internet (outgoing, i.e. egress, traffic). These outgoing connections must be limited to a single static IP or to a range of static IPs.

Answer 1: The external IP address is the IP address of the node machines in the GKE cluster. You can assign static IP addresses to these node VMs from VPC Network => External IP addresses. A more complex option would be to create a NAT gateway on a separate VM and …
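If you prefer to reserve such a static address programmatically rather than from the console, here is a hedged sketch against the Compute Engine API Go client; the project, region, and address name are placeholders, and you would still assign the reserved address to the node VM or NAT gateway afterwards.

```go
package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatalf("compute service: %v", err)
	}

	// Reserve a regional static external IP address (placeholder names).
	op, err := svc.Addresses.Insert("my-project", "europe-west2", &compute.Address{
		Name: "egress-static-ip",
	}).Do()
	if err != nil {
		log.Fatalf("reserving address: %v", err)
	}
	log.Printf("reserve operation started: %s", op.Name)
}
```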

Practical consequences of missing consensus on a Kubernetes cluster?

放肆的年华 submitted on 2020-01-05 08:26:17
Question: What exactly are the practical consequences of missing consensus on a Kubernetes cluster? Or, in other words: which functions of a Kubernetes cluster require consensus? What will work, and what won't? For example (and really only for example): will existing pods keep running? Can pods still be scaled horizontally? Example scenario: a cluster with two nodes loses one node, so no consensus is possible.

Answer 1: Consensus is fundamental to etcd, the distributed database that Kubernetes is built upon. …

GCLB: Frontend configuration: Error: Invalid value for field 'namedPorts[0].port'

左心房为你撑大大i submitted on 2020-01-05 04:29:28
Question: We are configuring Load Balancing (through the Google Console) with a "Frontend configuration" on port 443 and an added SSL certificate. But when we click on update configuration, we receive the error below. Can anyone help with this?

    Error: Invalid value for field 'namedPorts[0].port': '0'. Must be greater than or equal to 1

Answer 1: I found a workaround for this issue: configure the Load Balancing "Frontend configuration" with port 443 and add the SSL certificate through the gcloud command line instead. …
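The error suggests the backend instance group carries a named port with value 0. The workaround above goes through the gcloud CLI; purely for reference, here is a hedged sketch of setting a valid named port through the Compute API Go client instead, with placeholder project, zone, and instance group names.

```go
package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatalf("compute service: %v", err)
	}

	// Set a valid named port (https:443) on the backend instance group,
	// so the load balancer no longer sees namedPorts[0].port = 0.
	req := &compute.InstanceGroupsSetNamedPortsRequest{
		NamedPorts: []*compute.NamedPort{{Name: "https", Port: 443}},
	}
	op, err := svc.InstanceGroups.SetNamedPorts("my-project", "europe-west2-a", "my-instance-group", req).Do()
	if err != nil {
		log.Fatalf("setting named ports: %v", err)
	}
	log.Printf("operation started: %s", op.Name)
}
```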