google-kubernetes-engine

Deploying Helm workloads with Terraform on a GKE cluster

假装没事ソ submitted on 2021-02-19 06:19:06
Question: I am trying to use the Terraform Helm provider (https://www.terraform.io/docs/providers/helm/index.html) to deploy a workload to a GKE cluster. I am more or less following Google's example (https://github.com/GoogleCloudPlatform/terraform-google-examples/blob/master/example-gke-k8s-helm/helm.tf), but I do want to use RBAC by creating the service account manually. My helm.tf looks like this:

    variable "helm_version" {
      default = "v2.13.1"
    }

    data "google_client_config" "current" {}

    provider "helm" {
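For context, here is one way such a file is commonly completed — a hedged sketch, not the asker's actual configuration. The resource names ("tiller") and the google_container_cluster.cluster reference are assumptions for illustration; the provider attributes shown (install_tiller, service_account, tiller_image) belong to pre-1.0 releases of the Helm provider, which still managed Helm v2's Tiller:

    # RBAC: a dedicated service account for Tiller, bound to cluster-admin
    resource "kubernetes_service_account" "tiller" {
      metadata {
        name      = "tiller"
        namespace = "kube-system"
      }
    }

    resource "kubernetes_cluster_role_binding" "tiller" {
      metadata {
        name = "tiller"
      }
      role_ref {
        api_group = "rbac.authorization.k8s.io"
        kind      = "ClusterRole"
        name      = "cluster-admin"
      }
      subject {
        kind      = "ServiceAccount"
        name      = "tiller"
        namespace = "kube-system"
      }
    }

    # Point the Helm provider at the manually created service account
    provider "helm" {
      install_tiller  = true
      service_account = kubernetes_service_account.tiller.metadata[0].name
      tiller_image    = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"

      kubernetes {
        host                   = google_container_cluster.cluster.endpoint
        token                  = data.google_client_config.current.access_token
        cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
      }
    }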

Kubernetes: scaling up pods with a time-based trigger

為{幸葍}努か submitted on 2021-02-19 04:14:58
Question: I have a server running on Kubernetes that handles hourly processing jobs. I am thinking of using a Service to expose the pods and an (external) cron job to hit the load balancer, so that Kubernetes can autoscale to handle the higher load as required. In practice, though, if the cron job sends, say, 100 requests at the same time while there is only 1 pod, all of the traffic goes to that pod, and pods spun up afterwards still have no traffic to handle. How can I get around
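One common workaround — a sketch under assumptions, not an answer taken from the thread — is to pre-scale the deployment on a schedule from inside the cluster, so the capacity exists before the burst arrives and the Service spreads requests across already-warm pods. The deployment name (worker), replica count, and the scaler service account are hypothetical; that account would need RBAC permission to update deployments/scale:

    apiVersion: batch/v1beta1        # batch/v1 on Kubernetes >= 1.21
    kind: CronJob
    metadata:
      name: prescale-worker
    spec:
      schedule: "55 * * * *"         # five minutes before each hourly burst
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: scaler       # hypothetical account with deployments/scale rights
              restartPolicy: OnFailure
              containers:
              - name: scale
                image: bitnami/kubectl:1.20    # any image that ships kubectl
                command: ["kubectl", "scale", "deployment/worker", "--replicas=10"]

A second CronJob (or a horizontal autoscaler's scale-down) can return the deployment to one replica after the jobs finish.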

What is the default memory allocated to a pod?

旧巷老猫 submitted on 2021-02-19 03:23:32
Question: I am setting up a pod, say test-pod, on Google Kubernetes Engine. When I deploy the pod and look at the workload in the Google console, I can see that 100m of CPU is allocated to my pod by default, but I cannot see how much memory my pod has consumed; the memory-requested section always shows 0. I know we can restrict memory limits and initial allocation in the deployment YAML, but I want to know how much memory a pod gets allocated by default when no values are specified through
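For context — an illustrative sketch, not material from the thread: when a container sets no memory request, the scheduler treats the request as 0, which matches what the console shows. A LimitRange can give every container in a namespace a default request and limit; the values below are arbitrary examples:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: mem-defaults
      namespace: default
    spec:
      limits:
      - type: Container
        defaultRequest:
          memory: 128Mi      # applied when a container specifies no request
        default:
          memory: 256Mi      # applied when a container specifies no limit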

How do I disable interactive sessions (kubectl exec) to a Kubernetes pod?

醉酒当歌 submitted on 2021-02-17 05:13:39
Question: I need to disable interactive session/SSH access to a Kubernetes pod.

Answer 1: It's controlled via the RBAC system, via the pods/exec subresource. You can set up your policies however you want.

Source: https://stackoverflow.com/questions/60756423/how-do-i-disable-interactive-session-kubectl-exec-to-a-kubernetes-pod
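To make that answer concrete — a minimal sketch with a hypothetical role name and namespace: grant pod access without the pods/exec subresource, and any subject bound only to this role cannot run kubectl exec:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader-no-exec      # hypothetical name
      namespace: default
    rules:
    - apiGroups: [""]
      resources: ["pods", "pods/log"]
      verbs: ["get", "list", "watch"]
    # "pods/exec" with the "create" verb is deliberately absent, so
    # kubectl exec is denied for subjects bound to this role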

How to ssh into a Traefik pod?

冷暖自知 submitted on 2021-02-16 14:52:38
Question: I am using GKE. I launched the following Traefik deployment through kubectl: https://github.com/containous/traefik/blob/master/examples/k8s/traefik-deployment.yaml. The pod runs in the kube-system namespace, but I am not able to ssh into it:

    kubectl get po -n kube-system
    traefik-ingress-controller-5bf599f65d-fl9gx   1/1   Running   0   30m

    kubectl exec -it traefik-ingress-controller-5bf599f65d-fl9gx -n kube-system -- '\bin\bash'
    rpc error: code = 2 desc = oci runtime error: exec failed: container
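Two details stand out in that command — an observation, not the thread's accepted answer: the path uses backslashes, and the stock Traefik image may not ship bash, or any shell at all, since some Traefik images are built from scratch. When a shell is present, the usual form is forward slashes with sh:

    # pod name copied from the question; try sh, since minimal images often lack bash
    kubectl exec -it traefik-ingress-controller-5bf599f65d-fl9gx -n kube-system -- /bin/sh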

GKE with AWS worker nodes

巧了我就是萌 submitted on 2021-02-11 14:35:29
Question: Is it possible to add AWS EC2 nodes to a GKE cluster as worker nodes? I created a cluster (named "mycluster") in GKE with three nodes, and now I want to add an AWS EC2 instance to mycluster as a worker node. Is it possible to add one to the existing cluster? Please help me with this issue.

Answer 1: It might be technically possible (if you figured out how to give a kubelet on the AWS node credentials to join the cluster), but it isn't really a great idea, for a couple of reasons: Kubernetes is designed

Migrating a GKE cluster from the default VPC to a Shared VPC, and from public to private

走远了吗. submitted on 2021-02-11 14:31:27
Question: A few queries on GKE. We have a few GKE clusters running on the default VPC.

1. Can we migrate these clusters to use a Shared VPC, or at least a custom VPC? Per the GCP documentation, existing clusters in default-VPC mode cannot be changed over to the Shared VPC model, but can we convert from the default VPC to a custom VPC?
2. How do we migrate from a custom VPC to a Shared VPC? Is it a matter of creating a new cluster from the existing one, selecting the Shared VPC in the networking section for the new cluster, and then copying the Kubernetes resources
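If the migration does come down to building a new cluster, creating it directly on the Shared VPC subnet looks roughly like this — a hedged sketch in which the cluster name, host project, region, subnet, and secondary-range names are all placeholders, and the flags assume a VPC-native private cluster:

    gcloud container clusters create new-cluster \
      --region us-central1 \
      --enable-ip-alias \
      --network projects/HOST_PROJECT_ID/global/networks/shared-vpc \
      --subnetwork projects/HOST_PROJECT_ID/regions/us-central1/subnetworks/gke-subnet \
      --cluster-secondary-range-name pods \
      --services-secondary-range-name services \
      --enable-private-nodes \
      --master-ipv4-cidr 172.16.0.0/28

Workloads would then move by re-applying manifests (and migrating persistent data) rather than by any in-place conversion.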

Debugging a Kubernetes/GKE timeout issue when creating an Ingress with ingress-nginx

流过昼夜 submitted on 2021-02-11 13:54:42
Question: Using ingress-nginx v0.30 in the GKE cluster, there was no issue creating the ingress with the kubectl apply -f command. After upgrading to ingress-nginx v0.31.1, the following error is shown:

    Error from server (Timeout): error when creating "kubernetes/ingress.yaml": Timeout: request did not complete within requested timeout 30s

Questions: How do I debug the timeout of this request? There is no connection issue, the same ingress file works on v0.30, and Stackdriver shows no clue. Is there any way to increase the
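A plausible first step — an assumption based on what changed between those releases, not a confirmed answer from the thread: ingress-nginx added a validating admission webhook around v0.31, and on GKE private clusters the control plane often cannot reach that webhook on port 8443, which surfaces as exactly this kind of apply timeout. The webhook name below is the usual default and may differ per install:

    # check whether an admission webhook now intercepts Ingress creates
    kubectl get validatingwebhookconfigurations

    # quick test (not a fix): remove the webhook and re-apply the Ingress;
    # the durable fix on a private GKE cluster is a firewall rule letting the
    # control plane reach the worker nodes on TCP 8443
    kubectl delete validatingwebhookconfiguration ingress-nginx-admission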

Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>

好久不见. submitted on 2021-02-11 13:29:00
Question: I am trying to install a Redis cluster (StatefulSet) outside of GKE, and when inspecting the PVC I get:

    Events:
      Type     Reason              Age  From                         Message
      ----     ------              ---- ----                         -------
      Warning  ProvisioningFailed  10s  persistentvolume-controller  Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>

I have already added "--cloud-provider=gce" in /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-apiserver.yaml and restarted, but
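For reference — a sketch assuming the self-managed-on-GCE setup the question describes: a "slow" class normally points at the in-tree GCE persistent-disk provisioner, and that provider must also be enabled on the kubelets (--cloud-provider=gce on each node, not only on the control-plane components), otherwise the controller reports the GCECloudProvider as nil:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: slow
    provisioner: kubernetes.io/gce-pd    # in-tree GCE persistent-disk provisioner
    parameters:
      type: pd-standard                  # standard persistent disks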