google-kubernetes-engine

K8s Ingress rule for multiple paths in same backend service

亡梦爱人 submitted on 2019-12-04 13:04:15
I am trying to set up an ingress load balancer. Basically, I have a single backend service with multiple paths. Let's say my backend NodePort service name is hello-app. The pod associated with this service exposes multiple paths, like /foo and /bar. Below is the example NodePort service and the associated deployment:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-app
    spec:
      selector:
        app: hello-app
      type: NodePort
      ports:
      - protocol: "TCP"
        port: 7799
        targetPort: 7799
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-app
      labels:
        app: hello-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app:
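
For illustration, an Ingress that routes several paths to the same backend could look roughly like the sketch below. This is an assumption-laden example, not the asker's actual manifest: it assumes the hello-app service and port 7799 from above, the pre-1.19 extensions/v1beta1 Ingress API that was current at the time, and the GCE ingress class; the path values are illustrative.

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: hello-ingress
      annotations:
        kubernetes.io/ingress.class: "gce"   # use the GKE-managed HTTP(S) load balancer
    spec:
      rules:
      - http:
          paths:
          - path: /foo                        # both paths point at the same backend service
            backend:
              serviceName: hello-app
              servicePort: 7799
          - path: /bar
            backend:
              serviceName: hello-app
              servicePort: 7799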

Understanding --master-ipv4-cidr when provisioning private GKE clusters

北城以北 submitted on 2019-12-04 11:32:16
I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine. Google provides this example here of provisioning a private cluster where the control plane services (e.g. the Kubernetes API) live on the 172.16.0.16/28 subnet: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters

    gcloud beta container clusters create pr-clust-1 \
      --private-cluster \
      --master-ipv4-cidr 172.16.0.16/28 \
      --enable-ip-alias \
      --create-subnetwork ""

When I run this command, I see that: I now have a few gke subnets in my VPC belong to the
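
A hedged way to inspect what that command actually created (the cluster name pr-clust-1 is taken from above; you may need to add --zone or --region, and the filter expression is only illustrative):

    # Subnets that GKE created in the VPC
    gcloud compute networks subnets list --filter="name~gke"
    # The VPC peering connecting your network to the Google-managed VPC that hosts the master
    gcloud compute networks peerings list
    # The CIDR block reserved for the control plane
    gcloud container clusters describe pr-clust-1 \
      --format="value(privateClusterConfig.masterIpv4CidrBlock)"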

How do I disable the Stackdriver Logging agent in a cluster?

不打扰是莪最后的温柔 submitted on 2019-12-04 09:52:35
Our project recently migrated away from Stackdriver Logging. However, I cannot figure out how to get rid of the fluentd-cloud-logging-* pods in the kube-system namespace. If I delete the individual pods, they come right back. How do I kill them off for good? It's not clear to me how they're getting recreated; there is certainly no DaemonSet bringing them back. I already set monitoringService to none in the configuration described by gcloud container clusters describe. The fluentd-cloud-logging pods in the kube-system namespace are defined in the /etc/kubernetes/manifests/ folder of each host
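
If the goal is to switch the logging agent off at the cluster level, one hedged approach is to disable the cluster's logging service (CLUSTER_NAME and ZONE are placeholders; the flags available depend on your gcloud version):

    gcloud container clusters update CLUSTER_NAME --zone ZONE --logging-service none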

Error from server (Forbidden): error when creating .. : clusterroles.rbac.authorization.k8s.io …: attempt to grant extra privileges:

我们两清 submitted on 2019-12-04 08:19:18
Question: Failed to create clusterroles, even though <> is already assigned the "container engine admin" & "container engine cluster admin" roles:

    Error from server (Forbidden): error when creating "prometheus-operator/prometheus-operator-cluster-role.yaml":
    clusterroles.rbac.authorization.k8s.io "prometheus-operator" is forbidden: attempt to grant extra privileges:
    [{[create] [extensions] [thirdpartyresources] [] []}
     {[*] [monitoring.coreos.com] [alertmanagers] [] []}
     {[*] [monitoring.coreos.com] [prometheuses
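
This "attempt to grant extra privileges" error usually means the Kubernetes user running kubectl does not itself hold the permissions it is trying to put into the ClusterRole. A commonly cited workaround on GKE (the binding name is illustrative) is to grant your own Google account cluster-admin inside the cluster before applying the RBAC manifests:

    kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user=$(gcloud config get-value account)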

How do I get logs from all pods of a Kubernetes replication controller?

余生长醉 submitted on 2019-12-04 07:44:58
Question: Running kubectl logs shows me the stderr/stdout of one Kubernetes container. How can I get the aggregated stderr/stdout of a set of pods, preferably those created by a certain replication controller?

Answer 1: You can use labels:

    kubectl logs -l app=elasticsearch

Answer 2: I've created a small bash script called kubetail that makes this possible. For example, to tail all logs for pods named "app1" you can do kubetail app1. You can find the script here.

Answer 3: You can get the logs from multiple containers
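
To tie the label-based answer back to the replication controller in the question, a hedged sketch (the rc name my-rc and the app label are made up for illustration):

    # Find the label selector the replication controller applies to its pods
    kubectl get rc my-rc -o jsonpath='{.spec.selector}'
    # Then stream logs from every pod matching that selector
    kubectl logs -l app=my-rc --tail=50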

How to switch kubectl clusters between gcloud and minikube

六月ゝ 毕业季﹏ submitted on 2019-12-04 07:28:08
Question: I have Kubernetes working well in two different environments, namely my local environment (a MacBook running minikube) as well as Google's Container Engine (GCE, Kubernetes on Google Cloud). I use the MacBook/local environment to develop and test my YAML files and then, upon completion, try them on GCE. Currently I need to work with each environment individually: I need to edit the YAML files in my local environment and, when ready, (git) clone them to a GCE environment and then use
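
For the cluster-switching part of the question, the usual mechanism is kubectl contexts; a hedged sketch (the GKE context name shown is illustrative — gcloud generates it in the form gke_PROJECT_ZONE_CLUSTER):

    # List the contexts kubectl currently knows about
    kubectl config get-contexts
    # Point kubectl at the local minikube cluster
    kubectl config use-context minikube
    # Point kubectl back at the GKE cluster
    kubectl config use-context gke_my-project_us-central1-a_my-cluster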

How do you set a static IP address for a Google Container Engine (GKE) service?

狂风中的少年 submitted on 2019-12-04 04:26:43
A little bit of background: I have a Go service that uses gRPC to communicate with client apps. gRPC uses HTTP/2, so I can't use Google App Engine or the Google Cloud HTTP Load Balancer. I need raw TCP load balancing from the internet to my Go application. I went through the GKE tutorials and read the various docs and I can't find any way to give my application a static IP address. So how do you get a static IP attached to something running in GKE? This is not supported in Kubernetes v1.0.x, but in v1.1.x it will be available as service.spec.loadBalancerIP. As long as you actually own that IP
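
A minimal sketch of what that v1.1+ field looks like in a Service manifest, assuming you have already reserved a static regional IP in the same project (the names, ports, and address below are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-grpc-service
    spec:
      type: LoadBalancer
      loadBalancerIP: 130.211.1.2   # must be a static IP you have reserved
      selector:
        app: my-grpc-app
      ports:
      - protocol: TCP
        port: 443
        targetPort: 8443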

GKE can't disable Transparent Huge Pages… permission denied

拈花ヽ惹草 submitted on 2019-12-04 03:38:32
Question: I am trying to run a Redis image in GKE. It works, except I get the dreaded "Transparent Huge Pages" warning:

    WARNING you have Transparent Huge Pages (THP) support enabled in your kernel.
    This will create latency and memory usage issues with Redis. To fix this issue
    run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root,
    and add it to your /etc/rc.local in order to retain the setting after a reboot.
    Redis must be restarted after THP is disabled. Redis is currently
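
A commonly used workaround (a sketch under assumptions, not an official GKE recipe) is a root init container that writes the THP setting through a hostPath mount of /sys before Redis starts; every name below is illustrative, and some node images may additionally require privileged: true on the init container:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          initContainers:
          - name: disable-thp
            image: busybox
            # Flip THP off on the node this pod lands on
            command: ["sh", "-c", "echo never > /host-sys/kernel/mm/transparent_hugepage/enabled"]
            volumeMounts:
            - name: host-sys
              mountPath: /host-sys
          containers:
          - name: redis
            image: redis:5
          volumes:
          - name: host-sys
            hostPath:
              path: /sys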

Kubernetes: should I use HTTPS to communicate between services

谁都会走 submitted on 2019-12-04 03:28:12
Let's say I'm using a GCE ingress to handle traffic from outside the cluster and terminate TLS (https://example.com/api/items); from there the request gets routed to one of two services that are only available inside the cluster. So far so good. What if I have to call service B from service A: should I go all the way and use the cluster's external IP/domain with HTTPS (https://example.com/api/user/1) to call the service, or could I use the internal IP of the service and use HTTP (http://serviceb/api/user/1)? Do I have to encrypt the data or is it "safe" as long as it isn't leaving the
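
For reference, the two call paths the question contrasts look like this from a pod in service A (the service and namespace names are illustrative; the short name http://serviceb works from the same namespace, the fully qualified form from anywhere in the cluster):

    # In-cluster call: stays on the cluster network
    curl http://serviceb.default.svc.cluster.local/api/user/1
    # Hairpin through the external ingress: re-enters via the load balancer and TLS termination
    curl https://example.com/api/user/1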

How to update Kubernetes Cluster to the latest version available?

☆樱花仙子☆ submitted on 2019-12-04 01:09:22
Question: I began to try Google Container Engine recently. I would like to upgrade the Kubernetes cluster to the latest version available, if possible without downtime. Is there any way to do this?

Answer 1: Unfortunately, the best answer we currently have is to create a new cluster and move your resources over, then delete the old one. We are very actively working on making cluster upgrades reliable (both nodes and the master), but upgrades are unlikely to work for the majority of currently existing
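
For readers on later GKE releases (this postdates the answer above), in-place upgrades are exposed through gcloud; a hedged sketch, with the cluster name and zone as placeholders:

    # Upgrade the control plane first, then the node pools
    gcloud container clusters upgrade my-cluster --zone us-central1-a --master
    gcloud container clusters upgrade my-cluster --zone us-central1-a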