google-kubernetes-engine

Kubernetes pods failing on “Pod sandbox changed, it will be killed and re-created”

Submitted by 守給你的承諾、 on 2019-11-29 13:32:22
On a Google Container Engine (GKE) cluster, I sometimes see a pod (or several) not starting, and in its events I can see the following: Pod sandbox changed, it will be killed and re-created. If I wait, it just keeps retrying. If I delete the pod and allow it to be recreated by the Deployment's ReplicaSet, it starts properly. The behavior is inconsistent. Kubernetes versions 1.7.6 and 1.7.8. Any ideas? I can see the following message posted in the Google Cloud Status Dashboard: "We are investigating an issue affecting Google Container Engine (GKE) clusters where after docker crashes or is
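For inspecting a stuck pod and applying the delete-and-recreate workaround described above, a minimal sketch (the pod name my-app-6d8f4c7b9-x2k4p is an assumption, not from the original post):

# Look at the pod's events to confirm the sandbox error
kubectl describe pod my-app-6d8f4c7b9-x2k4p

# Or filter the event stream for just that pod
kubectl get events --field-selector involvedObject.name=my-app-6d8f4c7b9-x2k4p

# Delete the pod; the Deployment's ReplicaSet recreates it
kubectl delete pod my-app-6d8f4c7b9-x2k4p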

Creating image pull secret for google container registry that doesn't expire?

Submitted by 风格不统一 on 2019-11-29 11:45:05
Question: I'm trying to get Kubernetes to pull images from a Google Container Registry that belongs to another project. According to the docs, you should create an image pull secret using:

$ kubectl create secret docker-registry myregistrykey \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL

But which DOCKER_USER and DOCKER_PASSWORD should I use to authenticate with Google Container Registry? Looking at the GCR docs it
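For reference, the commonly documented approach for long-lived GCR credentials is a service account JSON key used with the literal username _json_key; a sketch, where the secret name gcr-pull and the key file gcr-reader-key.json are assumptions:

kubectl create secret docker-registry gcr-pull \
    --docker-server=https://gcr.io \
    --docker-username=_json_key \
    --docker-password="$(cat gcr-reader-key.json)" \
    --docker-email=any@example.com

The secret is then referenced from the pod spec (names are hypothetical):

spec:
  imagePullSecrets:
  - name: gcr-pull
  containers:
  - name: app
    image: gcr.io/other-project/app:latest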

ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?

Submitted by 独自空忆成欢 on 2019-11-29 07:55:39
I'm building a service where users can build web apps; these apps will be hosted under a virtual DNS name *.laska.io. For example, if Tom and Jerry both built an app, they'd have it hosted under tom.laska.io and jerry.laska.io. Now, suppose I have 1000 users. Should I create one big ingress that looks like this?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: jerry
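For comparison with the title's other option, a per-host Ingress would look roughly like the sketch below, with one such object created per user; the object name is an assumption and the backend is carried over from the excerpt above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tom-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: tom.laska.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80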

Access Kubernetes GKE cluster outside of GKE cluster with client-go?

Submitted by 廉价感情. on 2019-11-29 05:02:25
I have multiple Kubernetes clusters running on GKE (let's say clusterA and clusterB). I want to access both of those clusters from client-go in an app that is running in one of them (e.g. access clusterB from an app running on clusterA). In general, for authenticating with Kubernetes clusters from client-go, I see two options: in-cluster config or a kubeconfig file. So it is easy to access clusterA from clusterA, but not clusterB from clusterA. What are my options here? It seems that I cannot just pass GOOGLE_APPLICATION_CREDENTIALS and hope that client-go will take
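One common workaround (a sketch of the general pattern, not necessarily what the asker ended up doing) is to generate a kubeconfig that contains credentials for the remote cluster and point client-go's kubeconfig loader at that file; the cluster name, zone, project, and path are assumptions:

# Build a kubeconfig entry for clusterB (e.g. in an init step or entrypoint)
gcloud container clusters get-credentials clusterB \
    --zone us-central1-a --project my-project

# The entry lands in $HOME/.kube/config (or $KUBECONFIG if set);
# client-go can then load it via clientcmd.BuildConfigFromFlags("", path).
kubectl config get-contexts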

How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes

Submitted by 左心房为你撑大大i on 2019-11-29 04:06:12
With the help of Kubernetes I am running daily jobs on GKE: based on a cron configured in Kubernetes, a new container spins up each day and tries to insert some data into BigQuery. Our setup uses two different GCP projects; in one project we maintain the data in BigQuery, and in the other we run all of GKE. Since GKE has to interact with another project's resources, my guess is that I have to set an environment variable named GOOGLE_APPLICATION_CREDENTIALS that points to a service account JSON file, but since Kubernetes spins up a new container every day I
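The usual pattern here (a sketch under the assumption that the service account key is stored in a Secret named bq-writer-key; the CronJob and image names are also assumptions) is to mount the Secret into every pod the CronJob creates and point GOOGLE_APPLICATION_CREDENTIALS at the mounted file:

# Store the key once; every container created by the CronJob gets it mounted
kubectl create secret generic bq-writer-key --from-file=key.json=./bq-writer-key.json

A CronJob sketch that consumes it:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-bq-load
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: loader
            image: gcr.io/my-project/bq-loader:latest   # image name is hypothetical
            env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
            volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
              readOnly: true
          volumes:
          - name: google-cloud-key
            secret:
              secretName: bq-writer-key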

Kubernetes NodePort Custom Port

Submitted by …衆ロ難τιáo~ on 2019-11-29 02:57:54
Is there a way to specify a custom NodePort port in a Kubernetes Service YAML definition? I need to be able to define the port explicitly in my configuration file. You can set the type NodePort in your Service definition. Note that there is a node port range configured for your API server with the option --service-node-port-range (by default 30000-32767). You can specify a port in that range explicitly by setting the nodePort attribute under the port object, or the system will choose a port in that range for you. So a Service example with a specified nodePort would look like this:
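The example that follows that sentence is cut off in this excerpt; a sketch of what such a Service typically looks like (the name, selector, and port numbers are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal port of the Service
    targetPort: 8080  # container port the traffic is forwarded to
    nodePort: 30080   # explicit port in the 30000-32767 range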

Cluster autoscaler not downscaling

Submitted by 限于喜欢 on 2019-11-29 02:49:52
Question: I have a regional cluster set up in Google Kubernetes Engine (GKE). The node group is a single VM in each zone (3 total). I have a deployment with a minimum of 3 replicas, controlled by an HPA. The node group is configured for autoscaling (cluster autoscaler, aka CA). The problem scenario: update the deployment image. Kubernetes automatically creates new pods and the CA identifies that a new node is needed; I now have 4. The old pods get removed when all new pods have started, which means I have
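When the autoscaler does not remove the now-idle node, a common first step (a sketch of generic diagnostics, not taken from the original post) is to read the autoscaler's status ConfigMap and check for pods that block scale-down, such as kube-system pods without a PodDisruptionBudget:

# Why does the CA think the extra node is still needed?
kubectl -n kube-system describe configmap cluster-autoscaler-status

# Pods not covered by a PDB (or covered by a restrictive one) can block scale-down
kubectl get pdb --all-namespaces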

How to reduce CPU limits of kubernetes system resources?

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-11-28 23:31:27
I'd like to keep the number of cores in my GKE cluster below 3. This becomes much more feasible if the CPU limits of the K8s replication controllers and pods are reduced from 100m to at most 50m; otherwise, the K8s pods alone take 70% of one core. I decided against increasing the CPU power of a node: this would be conceptually wrong in my opinion, because the CPU limit is defined to be measured in cores. Instead, I did the following: replacing limitranges/limits with a version with "50m" as the default CPU limit (not necessary, but in my opinion cleaner), and patching all replication controllers in the
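The two steps the excerpt starts to list could look roughly like the sketch below; the LimitRange name and the kube-dns controller name are assumptions used only for illustration:

apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: default
spec:
  limits:
  - type: Container
    default:
      cpu: 50m          # default limit for containers that do not set one
    defaultRequest:
      cpu: 50m          # default request for containers that do not set one

And patching a kube-system replication controller (controller name is hypothetical):

kubectl -n kube-system patch rc kube-dns-v20 --type=json -p='[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/resources/limits/cpu",
   "value": "50m"}
]'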

Kubernetes Ingress (GCE) keeps returning 502 error

Submitted by 。_饼干妹妹 on 2019-11-28 23:17:16
I am trying to set up an Ingress in GCE Kubernetes, but when I visit the IP address and path combination defined in the Ingress, I keep getting a 502 error. Here is what I get when I run kubectl describe ing --namespace dpl-staging:

Name:             dpl-identity
Namespace:        dpl-staging
Address:          35.186.221.153
Default backend:  default-http-backend:80 (10.0.8.5:8080)
TLS:
  dpl-identity terminates
Rules:
  Host  Path              Backends
  ----  ----              --------
  *     /api/identity/*   dpl-identity:4000 (<none>)
Annotations:
  https-forwarding-rule:  k8s-fws-dpl-staging-dpl-identity--5fc40252fadea594
  https-target-proxy:     k8s-tps
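A frequent cause of GCE Ingress 502s is the backend failing the load balancer's HTTP health check (by default GET / must return 200), which the GCE ingress controller derives from the pod's readinessProbe; a sketch of the relevant pod-template fragment for the dpl-identity backend, where the /healthz path and image name are assumptions:

# Fragment of the Deployment's pod template (illustrative only)
containers:
- name: dpl-identity
  image: gcr.io/my-project/dpl-identity:latest   # image name is hypothetical
  ports:
  - containerPort: 4000
  readinessProbe:
    httpGet:
      path: /healthz    # must return HTTP 200; the GCE health check follows this
      port: 4000
    initialDelaySeconds: 5
    periodSeconds: 10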

How to expose NodePort to internet on GCE

Submitted by 南笙酒味 on 2019-11-28 22:26:54
Question: How can I expose a Service of type NodePort to the internet without using type LoadBalancer? Every resource I have found does it with a load balancer, but I don't want one; it's expensive and unnecessary for my use case, because I am running a single instance of the postgres image mounted on a persistent disk, and I would like to be able to connect to my database from my PC using pgAdmin. If it is possible, could you please provide a bit more detailed answer, as I am new to Kubernetes,
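On GCE this usually comes down to opening the NodePort in the VPC firewall and connecting to a node's external IP; a sketch, where the Service name, port 30432, and firewall rule name are assumptions:

# Find the NodePort assigned to the postgres Service and a node's external IP
kubectl get svc postgres -o wide
kubectl get nodes -o wide

# Open that port on the cluster's nodes (rule name and port are hypothetical)
gcloud compute firewall-rules create allow-postgres-nodeport \
    --allow tcp:30432

# Then point pgAdmin at <node-external-ip>:30432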