google-kubernetes-engine

How do I create a persistent volume claim with ReadWriteMany in GKE?

Submitted by 為{幸葍}努か on 2020-02-03 05:07:09
Question: What is the best way to create a persistent volume claim with ReadWriteMany, attaching the volume to multiple pods? Based on the support table at https://kubernetes.io/docs/concepts/storage/persistent-volumes, GCEPersistentDisk does not support ReadWriteMany natively. What is the best approach when working in the GCP GKE world? Should I be using a clustered file system such as CephFS or GlusterFS? Are there recommendations on what I should be using that is production ready? I was able to get
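The usual production-ready answer on GKE is an NFS-style backend such as Cloud Filestore (or a self-run NFS server), since NFS volumes do support ReadWriteMany. A minimal sketch, assuming a Filestore instance whose IP (10.0.0.2) and export path (/vol1) are hypothetical placeholders:

```yaml
# Sketch: NFS-backed volume (e.g., Cloud Filestore) supports ReadWriteMany.
# The server IP and export path below are hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2
    path: /vol1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # bind to the pre-created PV above, not a dynamic class
  resources:
    requests:
      storage: 1Ti
```

Any number of pods can then mount filestore-pvc read-write at the same time.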

How can I increase the size of the master node on Google Kubernetes Engine?

Submitted by 萝らか妹 on 2020-02-02 04:07:02
Question: I'm looking for a way to increase the master node VM size on GKE. The page at https://kubernetes.io/docs/admin/cluster-large/#size-of-master-and-master-components suggests that for a cluster of 11-100 nodes we should be using an n1-standard-4 VM for the Kubernetes master. However, since the cluster started out smaller and has since grown to this size, does that mean we're stuck with an underpowered master node? From the above link: Note that these master node sizes are currently only set at
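On GKE itself there is no knob to turn: the control plane is Google-managed and is resized automatically as the node count grows. The size setting in the linked page applies to self-managed clusters brought up with kube-up.sh. A sketch for that self-managed case only, assuming the GCE provider scripts and their MASTER_SIZE variable (an assumption worth verifying against your Kubernetes version):

```sh
# Self-managed GCE cluster via kube-up.sh only; does not apply to GKE,
# where Google resizes the managed control plane automatically.
# MASTER_SIZE is the variable read by cluster/gce/config-default.sh (assumption).
export KUBERNETES_PROVIDER=gce
export MASTER_SIZE=n1-standard-4   # the docs' suggestion for 11-100 nodes
cluster/kube-up.sh
```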

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

Submitted by 纵饮孤独 on 2020-02-01 04:56:25
Question: I currently have a service that looks like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: httpd
    spec:
      ports:
      - port: 80
        targetPort: 80
        name: http
        protocol: TCP
      - port: 443
        targetPort: 443
        name: https
        protocol: TCP
      selector:
        app: httpd
      externalIPs:
      - 10.128.0.2 # VM's internal IP

I can receive traffic fine from the external IP bound to the VM, but all of the requests are received by the HTTP server with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to
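The standard lever for preserving the client source IP is externalTrafficPolicy: Local, which stops kube-proxy from SNAT-ing traffic before it reaches a node-local endpoint. It is only honored on NodePort/LoadBalancer Services, not plain ClusterIP with externalIPs, so this sketch assumes switching the Service to a LoadBalancer is acceptable:

```yaml
# Sketch: avoid the SNAT hop by routing only to endpoints on the receiving
# node. Assumes a LoadBalancer Service replaces the externalIPs approach.
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep the original client source IP
  selector:
    app: httpd
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

With the Local policy, traffic is only delivered to pods on the node that received it, so httpd pods must be scheduled on the nodes taking traffic (a DaemonSet is one way to guarantee that).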

How to clean up old unused Kubernetes images/tags?

Submitted by 佐手、 on 2020-01-31 04:40:47
Question: To simplify deployment and short-term roll-back, it's useful to use a new Docker image tag for each new version deployed on Kubernetes. Without clean-up, this means old image:tag combinations are kept forever. How can I list every image:tag used by a Kubernetes container, so that I can find the old, unused ones and delete them automatically from the Docker registry? My goal is ideally for Google Container Engine (GKE) to delete unused images in Google Container Registry
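A hedged sketch of the comparison the question asks for, using only standard kubectl and gcloud commands (PROJECT and IMAGE are placeholders):

```sh
# List every image:tag currently referenced by running pods.
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort -u > images-in-use.txt

# List untagged digests for one GCR repository (candidates for deletion).
gcloud container images list-tags gcr.io/PROJECT/IMAGE \
  --format='get(digest)' --filter='-tags:*'

# Delete one digest (destructive; verify against images-in-use.txt first):
# gcloud container images delete gcr.io/PROJECT/IMAGE@sha256:DIGEST --quiet
```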

Using socket.io on GKE with nginx ingress

Submitted by 拈花ヽ惹草 on 2020-01-25 10:18:13
Question: I'm trying to integrate socket.io into an application deployed on Google Kubernetes Engine. Developing locally, everything works great, but once deployed I continuously get the dreaded 400 response when my sockets try to connect. I've been searching on SO and other sites for a few days now and haven't found anything that fixes my issue. Unfortunately this architecture was set up by a developer who is no longer at our company, and I'm certainly not a Kubernetes or GKE expert, so I
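A common culprit for socket.io 400s behind ingress-nginx is the HTTP long-polling handshake landing on different pods across requests; cookie-based affinity plus longer proxy timeouts is the usual remedy. A sketch with assumed service and port names:

```yaml
# Sketch: pin socket.io's polling handshake to one pod via a cookie and
# keep long-lived websocket connections from being timed out.
# Service name/port are assumptions; apiVersion depends on cluster version
# (extensions/v1beta1 on older clusters).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: socketio-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "io-affinity"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - http:
      paths:
      - path: /socket.io
        pathType: Prefix
        backend:
          service:
            name: my-socketio-service
            port:
              number: 80
```

If more than one replica serves the socket.io app, a shared adapter (e.g., socket.io-redis) is typically also needed so events reach clients connected to other pods.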

How to access a PostgreSQL pod/service in Google Kubernetes Engine

Submitted by 假如想象 on 2020-01-25 08:53:05
Question: I am deploying a simple web application. I divided it into 3 pods: front end, back end, and Postgres db. I successfully deployed my front end and back end to the Google Kubernetes service and they work as expected. But for my PostgreSQL db server, I used the following YAMLs. The Postgres image was created by me from the standard Postgres image on Docker Hub; I created some tables, inserted some data, and pushed it to Docker Hub. My backend is not able to make a connection to my db. I think I might
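A sketch of the usual wiring, assuming the Postgres pods carry a label like app: postgres (an assumption about this deployment): expose them through a ClusterIP Service, and have the backend connect to the Service DNS name instead of a pod IP.

```yaml
# Sketch: in-cluster access to Postgres goes through a ClusterIP Service;
# the selector label is assumed to match the Postgres deployment.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
```

From the same namespace the backend would then use the host postgres (or postgres.<namespace>.svc.cluster.local from elsewhere) on port 5432.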

Google Kubernetes Engine inter-cluster session affinity (Sticky Session)

Submitted by 会有一股神秘感。 on 2020-01-25 08:52:10
Question: The situation is that I have 2 applications, A and B, in the same namespace of a cluster on GKE. A runs on 1 pod and B runs on 2 pods. Every time a client communicates with our service, it connects first to A with websockets; A then sends HTTP requests to B. Since there are 2 pods of B, I would like session affinity between the outside client and my application B, so that every time a client connects to A, its requests are always processed by the same pod of B. Every
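Kubernetes offers Service-level stickiness keyed on the caller's IP; a minimal sketch for B's Service, with the caveat spelled out in the comments (labels and ports are assumptions):

```yaml
# Sketch: ClientIP session affinity on B's Service. Caveat: B sees the
# request's source as A's pod IP, so this pins each A pod (not each end
# user) to one B pod. True per-user stickiness would need an application-
# level key in A, e.g., a consistent hash on a session ID.
apiVersion: v1
kind: Service
metadata:
  name: b
spec:
  selector:
    app: b               # assumed label on B's pods
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - port: 80
    targetPort: 8080     # assumed container port
```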

GKE - Unable to make CUDA work with PyTorch

Submitted by 与世无争的帅哥 on 2020-01-25 00:20:06
Question: I have set up a Kubernetes node with an NVIDIA Tesla K80 and followed this tutorial to try to run a PyTorch Docker image with the NVIDIA drivers and CUDA drivers working. My NVIDIA drivers and CUDA drivers are all accessible inside my pod at /usr/local:

    $> ls /usr/local
    bin  cuda  cuda-10.0  etc  games  include  lib  man  nvidia  sbin  share  src

And my GPU is also recognized by my image nvidia/cuda:10.0-runtime-ubuntu18.04:

    $> /usr/local/nvidia/bin/nvidia-smi
    Fri Nov 8 16:24:35 2019
    +---------------------
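On GKE the NVIDIA device plugin only exposes the GPU (and mounts the driver libraries) to containers that explicitly request it, so the pod spec needs an nvidia.com/gpu resource limit. A sketch; the image is from the question, everything else is an assumption:

```yaml
# Sketch: request the GPU so the GKE device plugin attaches it to the
# container. Pod name and command are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
```

If the driver libraries are mounted but PyTorch still cannot see the GPU, a common follow-up is adding /usr/local/nvidia/lib64 to LD_LIBRARY_PATH inside the container.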

Deploying an app to GKE from CI

Submitted by 房东的猫 on 2020-01-24 23:11:25
Question: I use GitLab for my CI (they host it, and I have my own runners). I have a k8s cluster running in GKE. I want to use kubectl apply to deploy new versions of my containers. This all works from my local machine because it uses my Google account. I tried setting this all up as suggested by k8s and GitLab:

1. copy over the ca.crt
2. copy over the token

    - echo "$KUBE_CA_PEM" > kube_ca.pem
    - kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
    -
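A sketch of how that script section typically continues in .gitlab-ci.yml; KUBE_TOKEN and deployment.yaml mirror the question's naming style but are assumptions, not taken from it:

```yaml
# Continuation sketch for the CI script: finish building a kubeconfig
# (credentials + context) and then deploy. All kubectl subcommands are
# standard; KUBE_TOKEN and deployment.yaml are assumed names.
script:
  - echo "$KUBE_CA_PEM" > kube_ca.pem
  - kubectl config set-cluster default-cluster --server="$KUBE_URL" --certificate-authority="$(pwd)/kube_ca.pem"
  - kubectl config set-credentials default-admin --token="$KUBE_TOKEN"
  - kubectl config set-context default-context --cluster=default-cluster --user=default-admin
  - kubectl config use-context default-context
  - kubectl apply -f deployment.yaml
```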