google-kubernetes-engine

How to reduce CPU limits of kubernetes system resources?

可紊 submitted on 2019-11-27 14:46:10
Question: I'd like to keep the number of cores in my GKE cluster below 3. This becomes much more feasible if the CPU limits of the K8s replication controllers and pods are reduced from 100m to at most 50m. Otherwise, the K8s pods alone take 70% of one core. I decided against increasing the CPU power of a node; this would be conceptually wrong in my opinion, because the CPU limit is defined to be measured in cores. Instead, I did the following: replacing limitranges/limits with a version with "50m" as
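A minimal sketch of what such a replacement could look like, assuming the GKE-created LimitRange is the one named "limits" (as in limitranges/limits above) in the default namespace; the exact manifest used in the question is not shown in the excerpt:

    # Inspect the existing LimitRange
    kubectl get limitrange limits -o yaml

    # Replace the default CPU request of 100m with 50m
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limits
    spec:
      limits:
      - type: Container
        defaultRequest:
          cpu: 50m
    EOF

Note that this only changes the default applied to containers that do not specify their own requests; system pods that set explicit CPU requests in their specs are unaffected.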

Kubernetes Ingress (GCE) keeps returning 502 error

岁酱吖の submitted on 2019-11-27 13:38:18
Question: I am trying to set up an Ingress in GCE Kubernetes. But when I visit the IP address and path combination defined in the Ingress, I keep getting a 502 error. Here is what I get when I run: kubectl describe ing --namespace dpl-staging

    Name:             dpl-identity
    Namespace:        dpl-staging
    Address:          35.186.221.153
    Default backend:  default-http-backend:80 (10.0.8.5:8080)
    TLS:
      dpl-identity terminates
    Rules:
      Host  Path              Backends
      ----  ----              --------
      *     /api/identity/*   dpl-identity:4000 (<none>)
    Annotations
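For reference, a minimal sketch of an Ingress manifest that would roughly match the output above; the apiVersion and the TLS secret name (assumed here to also be dpl-identity) are guesses, since the actual manifest is not shown in the excerpt:

    kubectl apply --namespace dpl-staging -f - <<'EOF'
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: dpl-identity
      namespace: dpl-staging
    spec:
      tls:
      - secretName: dpl-identity      # assumed secret name ("TLS: dpl-identity terminates")
      rules:
      - http:
          paths:
          - path: /api/identity/*
            backend:
              serviceName: dpl-identity
              servicePort: 4000
    EOF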

How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in same project

混江龙づ霸主 submitted on 2019-11-27 13:38:18
I have two services, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2. I don't want to expose a public IP for S2 for security reasons. Using NodePorts on K2's compute instances with network load balancing takes the flexibility away, as I would have to add/remove K2's compute instances from the target pool each time a node is added/removed in K2. Is there something like a "service selector" for automatically updating the target pool? If not, is there any other better approach for this use case? I can think of a couple of ways to access
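One approach sometimes used for this on GKE (not necessarily the one this answer goes on to describe) is to expose S2 through an internal load balancer, so GCP manages the backend instances instead of a hand-maintained target pool; a minimal sketch, assuming S2's pods carry the label app: s2, listen on port 8080, and that K1 and K2 share the same VPC network and region:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: s2-internal
      annotations:
        cloud.google.com/load-balancer-type: "Internal"   # GKE-specific annotation
    spec:
      type: LoadBalancer
      selector:
        app: s2              # assumed pod label
      ports:
      - port: 80
        targetPort: 8080     # assumed container port
    EOF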

Define size for /dev/shm on container engine

随声附和 submitted on 2019-11-27 10:53:07
Question: I'm running Chrome with xvfb on Debian 8. It works until I open a tab and try to load content; the process dies silently... Fortunately, I have gotten it to run smoothly in my local Docker setup using docker run --shm-size=1G. There is a known bug in Chrome that causes it to crash when /dev/shm is too small. I am deploying to Container Engine and inspecting the OS specs. The host OS has a solid 7G mounted on /dev/shm, but the actual container is only allocated 64M, and Chrome crashes. How can I set
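One commonly used workaround (the excerpt cuts off before the answer) is to mount a memory-backed emptyDir volume over /dev/shm in the pod spec; a minimal sketch with hypothetical pod, container, and image names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: chrome-xvfb                # hypothetical name
    spec:
      containers:
      - name: chrome                   # hypothetical container
        image: my-chrome-xvfb:latest   # hypothetical image
        volumeMounts:
        - name: dshm
          mountPath: /dev/shm
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory               # tmpfs-backed, replaces the 64M default
          sizeLimit: 1Gi               # honored on newer clusters; older versions ignore it
    EOF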

How can I trigger a Kubernetes Scheduled Job manually?

て烟熏妆下的殇ゞ submitted on 2019-11-27 09:43:14
Question: I've created a Kubernetes Scheduled Job, which runs twice a day according to its schedule. However, I would like to trigger it manually for testing purposes. How can I do this?

Answer 1: The issue #47538 that @jdf mentioned is now closed and this is now possible. The original implementation can be found here, but the syntax has changed. With kubectl v1.10.1+ the command is: kubectl create job --from=cronjob/<cronjob-name> <job-name> It seems to be backward compatible with older clusters as it
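A usage example with hypothetical names (my-report as the CronJob, my-report-manual-001 as the one-off Job):

    # Create a one-off Job from the existing CronJob (kubectl v1.10.1+)
    kubectl create job --from=cronjob/my-report my-report-manual-001

    # Watch it run and inspect its pods
    kubectl get job my-report-manual-001
    kubectl get pods -l job-name=my-report-manual-001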

Is it necessary to recreate a Google Container Engine cluster to modify API permissions?

三世轮回 submitted on 2019-11-27 08:16:58
Question: After reading this earlier question, I have some follow-up questions. I have a Google Container Engine cluster which lacks the Cloud Monitoring API access permission. According to this post I cannot enable it. The referenced post is one year old; just to be sure, is it still correct? To enable (for example) the Cloud Monitoring API for my GKE cluster, would I have to recreate the entire cluster because there is no way to change these permissions after cluster creation? Also, if I have to do
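The excerpt ends before any answer, but as a hedged sketch of the workaround commonly cited in that era: OAuth scopes are fixed per node pool at creation time, so rather than recreating the whole cluster you could add a new node pool that carries the missing scope and retire the old one (all names below are hypothetical):

    # Create a node pool whose nodes have the Cloud Monitoring scope
    gcloud container node-pools create monitoring-pool \
      --cluster=my-cluster \
      --zone=us-central1-a \
      --scopes=https://www.googleapis.com/auth/monitoring

    # Once workloads have moved over, the old pool can be removed
    gcloud container node-pools delete default-pool \
      --cluster=my-cluster \
      --zone=us-central1-a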

How can I keep a container running on Kubernetes?

左心房为你撑大大i submitted on 2019-11-27 06:17:40
I'm now trying to run a simple container with a shell (/bin/bash) on a Kubernetes cluster. I thought that there was a way to keep a Docker container running by using the pseudo-tty and detach options (the -td options on the docker run command). For example, $ sudo docker run -td ubuntu:latest Is there an option like this in Kubernetes? I've tried running a container by using a kubectl run-container command like: kubectl run-container test_container ubuntu:latest --replicas=1 But the container exits after a few seconds (just like launching with the docker run command without the options I mentioned
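One common way to keep such a container alive is to give it a long-running command instead of relying on a tty. A minimal sketch using a modern kubectl (run-container was later replaced by kubectl run; note also that resource names cannot contain underscores, hence test-container below):

    # Run an Ubuntu pod that stays up by sleeping forever
    kubectl run test-container --image=ubuntu:latest --restart=Never -- sleep infinity

    # Attach a shell when needed
    kubectl exec -it test-container -- /bin/bash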

Static outgoing IP in Kubernetes

大憨熊 submitted on 2019-11-27 05:32:40
Question: I run a k8s cluster in Google Cloud (GKE) and a MySQL server in AWS (RDS). Pods need to connect to RDS, which only allows connections from certain IPs. How can I configure outgoing traffic to have a static IP?

Answer 1: I had the same problem connecting to an SFTP server from a Pod. To solve this, first you need to create an external IP address: gcloud compute addresses create {{ EXT_ADDRESS_NAME }} --region {{ REGION }} Then, I suppose that your pod is scheduled on your cluster's default-pool nodes.
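A concrete usage of that first step, with hypothetical values substituted for the placeholders (the rest of the answer is cut off in the excerpt):

    # Reserve a static external IP (name and region are hypothetical)
    gcloud compute addresses create k8s-egress-ip --region us-central1

    # Show the reserved address so it can be whitelisted on the RDS side
    gcloud compute addresses describe k8s-egress-ip --region us-central1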

Is it possible to use 1 Kubernetes ingress object to route traffic to k8s services in different clusters?

懵懂的女人 submitted on 2019-11-27 02:58:56
Question: I have the following setup: k8s cluster A, containing service SA; k8s cluster B, containing service SB and an HTTP ingress that routes traffic to SB. Is it possible to add service SA as the backend service for one of the paths of the ingress? If so, how do I refer to it in the ingress configuration file? (Using selectors in the usual way doesn't work, presumably because we are in different clusters.)

Answer 1: Ingress objects help configure HTTP(S) load balancing for a single cluster. They don't have

Kubernetes python client: authentication issue

时光总嘲笑我的痴心妄想 submitted on 2019-11-27 02:55:09
Question: We are using the Kubernetes Python client (4.0.0) in combination with Google's Kubernetes Engine (master and node pools run k8s 1.8.4) to periodically schedule workloads on Kubernetes. A simplified version of the script we use to create the pod, attach to the logs, and report the end status of the pod looks as follows:

    from kubernetes import client, config

    config.load_kube_config(persist_config=False)
    v1 = client.CoreV1Api()
    v1.create_namespaced_pod(body=pod_specs_dict, namespace=args.namespace)
    logging_response = v1.read