google-kubernetes-engine

Is there a way to resize a GKE cluster to 0 nodes after a certain amount of idle time?

我们两清 submitted on 2020-02-25 06:51:38
Question: I have a GKE cluster that I want to sit at 0 nodes, scale up to 3 nodes to perform a task, and then scale back down to 0 after a certain amount of idle time. Is there a way to do this?

Answer 1: A GKE cluster can never scale all the way down to 0 on its own because of the system pods running in the cluster. The pods running in the kube-system namespace count against resource usage on your nodes, so the autoscaler will never decide to scale the entire cluster down to 0. It is definitely possible to …
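The answer is cut off above, but the manual route it appears to be heading toward can be scripted: resize the node pool to 0 after the task and back up before the next one. A minimal sketch, assuming a cluster my-cluster with a node pool default-pool in zone us-central1-a (all three names are placeholders):

    # Scale the node pool down to 0 when the task is done
    # (--quiet skips the interactive confirmation prompt)
    gcloud container clusters resize my-cluster \
        --node-pool default-pool --num-nodes 0 --zone us-central1-a --quiet

    # Scale back up to 3 before the next task runs
    gcloud container clusters resize my-cluster \
        --node-pool default-pool --num-nodes 3 --zone us-central1-a --quiet

Run from a scheduled job, this gives the idle-time behavior the question asks for, at the cost of managing the schedule yourself.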

No access token in .kube/config

允我心安 submitted on 2020-02-24 04:13:57
Question: After upgrading my cluster in GKE, the dashboard will no longer accept certificate authentication. No problem, there's a token available in .kube/config, says my colleague:

    user:
      auth-provider:
        config:
          access-token: REDACTED
          cmd-args: config config-helper --format=json
          cmd-path: /home/user/workspace/google-cloud-sdk/bin/gcloud
          expiry: 2018-01-09T08:59:18Z
          expiry-key: '{.credential.token_expiry}'
          token-key: '{.credential.access_token}'
        name: gcp

Except in my case there isn't…

    user:
      auth…
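The entry is truncated above; one common way to repopulate a missing auth-provider block (a sketch, assuming a cluster named my-cluster in zone europe-west1-b, both placeholders) is to let gcloud rewrite the kubeconfig entry and then trigger a token refresh:

    # Rewrites the cluster, context, and user entries in ~/.kube/config
    gcloud container clusters get-credentials my-cluster --zone europe-west1-b

    # Any kubectl call now runs the gcloud auth helper, which fetches
    # a fresh access token and caches it back into the config
    kubectl get nodes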

Google Container Engine Clusters in different regions with cloud load balancer

无人久伴 submitted on 2020-02-22 08:09:39
Question: Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load-balance between the apps running on those clusters?

Answer 1: Google Cloud HTTP(S) Load Balancing, TCP Proxy, and SSL Proxy support cross-region load balancing. You can point them at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools and sends traffic to a NodePort for your service. However, it would be …
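The answer is truncated above; here is a hedged sketch of the wiring it describes, where every name, port, and zone is an illustrative placeholder:

    # Health check against the NodePort your Service exposes on each node
    gcloud compute health-checks create http my-health-check --port 30080

    # One global backend service shared by both clusters
    gcloud compute backend-services create my-backend-service \
        --global --protocol HTTP --health-checks my-health-check

    # Attach the node instance groups of the EU and US clusters
    gcloud compute backend-services add-backend my-backend-service \
        --instance-group my-eu-nodes --instance-group-zone europe-west1-b --global
    gcloud compute backend-services add-backend my-backend-service \
        --instance-group my-us-nodes --instance-group-zone us-central1-a --global

A URL map and a global forwarding rule would still need to be created in front of the backend service to complete the HTTP(S) load balancer.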

how do I add a firewall rule to a gke service?

▼魔方 西西 submitted on 2020-02-22 07:42:46
Question: It's not clear to me how to do this. I create a service for my cluster like this:

    kubectl expose deployment my-deployment --type=LoadBalancer --port 8888 --target-port 8888

And now my service is accessible from the internet on port 8888. But I don't want that; I only want to make my service accessible from a list of specific public IPs. How do I apply a GCP firewall rule to a specific service? It's not clear how this works, or why the service is publicly accessible from the internet by default.

Answer 1: …
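The answer itself is cut off above; one common approach on GKE (not necessarily what the truncated answer said) is the Service's loadBalancerSourceRanges field, which GKE translates into firewall rules for the load balancer. A sketch, with the service name, label, and CIDR ranges as illustrative placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service            # placeholder name
    spec:
      type: LoadBalancer
      selector:
        app: my-deployment        # assumes the pods carry this label
      ports:
      - port: 8888
        targetPort: 8888
      # Only these source CIDRs may reach the load balancer
      loadBalancerSourceRanges:
      - 203.0.113.0/24
      - 198.51.100.7/32

Applying a manifest like this avoids editing GCP firewall rules by hand for each service.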

Kubernetes NetworkPolicy allow loadbalancer

耗尽温柔 submitted on 2020-02-20 09:08:28
Question: I have a Kubernetes cluster running on Google Kubernetes Engine (GKE) with network policy support enabled. I created an nginx deployment and a load balancer for it:

    kubectl run nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=LoadBalancer

Then I created this network policy to make sure other pods in the cluster won't be able to connect to it anymore:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: access-nginx
    spec:
      podSelector:
        matchLabels:
          run: nginx
      …
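The policy above is truncated before its ingress rules. One approach seen for this situation is to allow traffic from every source except the cluster's own pods, so connections arriving through the external load balancer (and Google's health checks, which come from 130.211.0.0/22 and 35.191.0.0/16) still get in. A sketch, where 10.4.0.0/14 is a placeholder for your cluster's actual pod CIDR:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: access-nginx
    spec:
      podSelector:
        matchLabels:
          run: nginx
      ingress:
      - from:
        # Permit any external source while still blocking cluster
        # pods, by excluding the pod CIDR from the allowed range
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 10.4.0.0/14

Whether this cleanly separates pods from external clients depends on how the cluster SNATs load-balancer traffic, so treat it as a starting point rather than a guarantee.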

GKE master node

妖精的绣舞 submitted on 2020-02-03 09:16:36
Question: In GKE, when we create a cluster, a master node and many worker nodes are created. My doubt is whether the master node is one of the nodes we created (among the replicas we specified), or whether GKE creates the master node separately. And what is the topology (e.g., mesh, star) in which a GKE cluster is formed?

Answer 1: In GKE, if you create a standard zonal cluster you will have API access to one master node; if you create a regional cluster you will have three master nodes, but you will access them through one endpoint (one …
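A quick way to see the distinction (cluster names and locations below are placeholders): the control plane is created and managed by Google separately from the worker nodes, so it never appears in your node list.

    # Zonal cluster: a single Google-managed control plane
    gcloud container clusters create my-zonal-cluster \
        --zone us-central1-a --num-nodes 3

    # Regional cluster: the control plane is replicated across the
    # region's zones but exposed through a single API endpoint
    gcloud container clusters create my-regional-cluster \
        --region us-central1 --num-nodes 1

    # Lists only the worker nodes; the masters are not visible here
    kubectl get nodes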