google-kubernetes-engine

How to expose dynamic ports using Kubernetes service on Google Container Engine?

こ雲淡風輕ζ submitted on 2019-12-01 08:15:39
Question: I am trying to connect to a Docker container on Google Container Engine (GKE) from my local machine over the internet, using TCP. So far I have used a Kubernetes Service, which provides an external IP address that the local machine can use to reach the container on GKE. When creating a Service, we can specify only individual ports, not a port range. Please see my-ros-service.yaml below. In this case, the container can be reached on port 11311 from outside GCE.
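A sketch of the closest workaround: a Service cannot express a port range, but a single Service may list several explicitly named ports. The service, label, and port values below are hypothetical, not taken from the asker's manifest.

```yaml
# Sketch only: Kubernetes Services cannot express a port *range*,
# but one Service may enumerate several named ports.
apiVersion: v1
kind: Service
metadata:
  name: my-ros-service      # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-ros-app          # hypothetical label
  ports:
    - name: ros-master
      protocol: TCP
      port: 11311
      targetPort: 11311
    - name: ros-node-a       # each additional port must be listed explicitly
      protocol: TCP
      port: 11312
      targetPort: 11312
```

For genuinely dynamic port allocation, enumerating ports this way does not scale, which is the limitation the question is about.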

GKE: secured access to services from outside the cluster

我是研究僧i submitted on 2019-12-01 06:37:25
Is there any way to securely access the 'internal' services of the cluster (those not exposed outside) from the outside? The goal is simple: I need to debug clients of those services and therefore need to reach them, but I don't want to expose them externally. On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost; I tried using an SSHD container, but that didn't get me very far: the services are not directly on that container, and since service IPs are managed dynamically I'm not sure how to reach the next hop on the network. Ideally a VPN would be …
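One approach that avoids both an SSHD container and external exposure is kubectl's built-in tunneling. A sketch, assuming kubectl is already authenticated against the cluster; the label and pod name are hypothetical:

```shell
# Find a pod backing the internal service (hypothetical label):
kubectl get pods -l app=my-internal-app

# Tunnel local port 8080 to port 80 of that pod (hypothetical pod name):
kubectl port-forward my-internal-app-pod-abc123 8080:80

# The service's backend is now reachable at localhost:8080 on the workstation,
# without the service ever being exposed outside the cluster.
```

This is roughly the SSH-tunnel workflow the asker describes, except the API server does the hop resolution, so dynamically managed service IPs are not a problem.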

Kubernetes HTTPS Ingress in Google Container Engine

别等时光非礼了梦想. submitted on 2019-12-01 05:39:02
I want to expose an HTTP service running in Google Container Engine through an HTTPS-only load balancer. How do I specify in the Ingress object that I want an HTTPS-only load balancer instead of the default HTTP one? Or is there a way to permanently drop the HTTP protocol from the created load balancer? When I add the HTTPS protocol and then drop HTTP, HTTP is recreated after a few minutes by the platform. Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  backend:
    serviceName: myapp-service
    servicePort: 8080

In order to expose the service over HTTPS only, you can block traffic on port …
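A sketch of the declarative fix: GKE's ingress controller honors an annotation that stops it from provisioning the HTTP (port 80) forwarding rule, so it is not recreated. The TLS secret name below is hypothetical.

```yaml
# Sketch, extending the asker's Ingress: the allow-http annotation tells the
# GKE ingress controller not to create (or recreate) the HTTP forwarding rule.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: myapp-tls-cert   # hypothetical secret holding cert + key
  backend:
    serviceName: myapp-service
    servicePort: 8080
```

Because the annotation is part of the desired state, the platform's reconciler will no longer re-add HTTP a few minutes later, which manual deletion cannot achieve.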

Logging to the Google Cloud in Google Container/Compute Engine with Go

亡梦爱人 submitted on 2019-12-01 05:22:38
I have a GKE application with 20 nodes running Go. I would like to consolidate all the logs for viewing in the Google Developers Console log viewer, but I am having two problems: I can't get severity filtering, and each newline in my log message starts a new log entry in the viewer (problematic for messages containing newlines). I have google-fluentd set up so that all stdout gets logged to the cloud, and I have made use of log.Lshortfile, call depth, and log.Logger.Output to get the filename and line number from the "log" library. I've looked at this library: "google.golang.org/cloud/logging", but I am having …
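One way to address both problems at once is to emit each record as a single-line JSON object on stdout: google-fluentd on GKE can pick a severity out of JSON-formatted log lines, and the JSON encoder escapes embedded newlines so a multi-line message stays one entry. A minimal sketch; the field names are assumptions, not a pinned contract of the collector.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// logEntry is a minimal sketch of a single-line JSON log record.
// On GKE, google-fluentd can read a "severity" field out of JSON-formatted
// stdout lines; the exact field names here are assumptions.
type logEntry struct {
	Severity string `json:"severity"`
	Message  string `json:"message"`
}

// formatEntry renders one log record as one JSON line. Newlines inside the
// message are escaped by the JSON encoder, so a multi-line message is not
// split into several entries by the collector.
func formatEntry(severity, message string) string {
	b, err := json.Marshal(logEntry{Severity: severity, Message: message})
	if err != nil {
		return `{"severity":"ERROR","message":"log marshal failed"}`
	}
	return string(b)
}

func main() {
	fmt.Println(formatEntry("ERROR", "query failed\nretrying in 5s"))
}
```

The severity string then drives the log viewer's severity filter, replacing the prefix-based tricks built on log.Lshortfile and log.Logger.Output.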

How to update Kubernetes Cluster to the latest version available?

こ雲淡風輕ζ submitted on 2019-12-01 03:48:42
I began trying Google Container Engine recently. I would like to upgrade the Kubernetes cluster to the latest available version, if possible without downtime. Is there any way to do this? Unfortunately, the best answer we currently have is to create a new cluster and move your resources over, then delete the old one. We are very actively working on making cluster upgrades reliable (both nodes and the master), but upgrades are unlikely to work for the majority of currently existing clusters. We now have a checked-in upgrade tool for the master and nodes: https://github.com/GoogleCloudPlatform …
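The answer above predates managed upgrades; on current GKE the workflow is roughly the following gcloud sequence (cluster and zone names hypothetical):

```shell
# Assumes an authenticated gcloud SDK; names are hypothetical.
# See which Kubernetes versions are currently offered:
gcloud container get-server-config --zone us-central1-a

# Upgrade the master first, then the nodes:
gcloud container clusters upgrade my-cluster --zone us-central1-a --master
gcloud container clusters upgrade my-cluster --zone us-central1-a
```

Node upgrades recreate nodes one at a time, so workloads with multiple replicas behind a Service can generally ride through without downtime.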

Is it possible to route traffic to a specific Pod?

纵然是瞬间 submitted on 2019-12-01 01:37:03
Question: Say I am running my app in GKE as a multi-tenant application. I create multiple Pods that host my application. Now I want:

Customers 1-1000 to use Pod1
Customers 1001-2000 to use Pod2
etc.

If I have a gcloud global IP that points to my cluster, is it possible to route a request to the correct Pod holding that customer's data, based on the incoming IP address/domain? Answer 1: You can guarantee session affinity with Services, but not in the way you are describing. So, your customers 1-1000 won …
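A sketch of the affinity the answer refers to: ClientIP affinity pins a given client to one backend Pod, but it offers no way to express a customer-range-to-Pod mapping. Names below are hypothetical.

```yaml
# Sketch: session affinity keeps one client on one pod, but the pod is
# chosen by the proxy, not by customer ID.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service       # hypothetical
spec:
  selector:
    app: myapp              # hypothetical
  sessionAffinity: ClientIP
  ports:
    - port: 80
      targetPort: 8080
```

Deterministic per-tenant routing would instead need an application-level router in front, or separate Deployments and Services per tenant shard with distinct labels.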

How to access client IP of an HTTP request from Google Container Engine?

家住魔仙堡 submitted on 2019-12-01 00:05:21
I'm running a gunicorn+flask service in a Docker container on Google Container Engine. I set up the cluster following the tutorial at http://kubernetes.io/docs/hellonode/ The REMOTE_ADDR environment variable always contains an internal address in the Kubernetes cluster. What I was looking for is HTTP_X_FORWARDED_FOR, but it's missing from the request headers. Is it possible to configure the service to retain the external client IP in the requests? I assume you set up your service by setting the service's type to LoadBalancer? It's an unfortunate limitation of the way incoming network-load …
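The limitation described above was later addressed in Kubernetes: setting the Service's external traffic policy to Local routes traffic only to nodes hosting a backend Pod and preserves the client source IP. A sketch with hypothetical names and ports:

```yaml
# Sketch: externalTrafficPolicy: Local (a beta annotation in early versions)
# preserves the client source IP seen by the pod, at the cost of routing
# traffic only to nodes that run a backend pod.
apiVersion: v1
kind: Service
metadata:
  name: flask-service        # hypothetical
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: flask-app           # hypothetical
  ports:
    - port: 80
      targetPort: 8000       # hypothetical gunicorn port
```

With this in place, REMOTE_ADDR in the Flask app reflects the external client rather than a cluster-internal hop.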

Access HTTP service running in GKE from Google Dataflow

依然范特西╮ submitted on 2019-11-30 14:10:11
I have an HTTP service running on a Google Container Engine cluster (behind a Kubernetes Service). My goal is to access that service from a Dataflow job running in the same GCP project using a fixed name (in the same way services can be reached from inside GKE using DNS). Any ideas? Most solutions I have read on Stack Overflow rely on having kube-proxy installed on the machines trying to reach the service. As far as I know, it is not possible to reliably set up that service on every worker instance created by Dataflow. One option is to create an external balancer and create an A record in the …
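A variant of the load-balancer option that avoids public exposure: GKE supports internal TCP load balancing via a Service annotation, giving the service a stable private IP reachable from Dataflow workers in the same VPC and region. Names are hypothetical.

```yaml
# Sketch: the GKE-specific annotation provisions an *internal* TCP load
# balancer instead of a public one.
apiVersion: v1
kind: Service
metadata:
  name: my-http-service      # hypothetical
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-http-app         # hypothetical
  ports:
    - port: 80
      targetPort: 8080
```

The resulting internal IP can then be registered as an A record in a Cloud DNS private zone, which gives the Dataflow job the fixed name the asker wants.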

Google Cloud Build deploy to GKE Private Cluster

自闭症网瘾萝莉.ら submitted on 2019-11-30 12:44:31
I'm running Google Kubernetes Engine with the "private-cluster" option. I've also defined "authorized master networks" to be able to remotely access the environment, and this works just fine. Now I want to set up some kind of CI/CD pipeline using Google Cloud Build: after successfully building a new Docker image, this new image should be automatically deployed to GKE. When I first fired off the new pipeline, the deployment to GKE failed; the error message was something like: "Unable to connect to the server: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout". As I had the "authorized master networks" …
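The timeout is consistent with the Cloud Build workers' egress address not being in the authorized networks list. One commonly used mitigation is to add the build environment's CIDR to that list; the CIDR below is a placeholder, and cluster/zone names are hypothetical.

```shell
# Assumes an authenticated gcloud SDK; CIDR and names are placeholders.
gcloud container clusters update my-private-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/29
```

Note the caveat: default Cloud Build workers do not have a stable outbound IP, so in practice this pattern is paired with a build pool or proxy inside the VPC whose address range is known.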