google-kubernetes-engine

Does GKE support nginx-ingress with a static IP?

穿精又带淫゛_ submitted on 2019-12-05 05:27:17
I have been using the Google Cloud ingress. I have also deployed nginx-ingress and am trying to set it up with a static IP address in GKE. Can we use both the Google Cloud ingress and nginx-ingress in the same cluster? How can we use nginx-ingress with a static IP? Thanks. GalloCedrone: First question: As Radek 'Goblin' Pieczonka has already pointed out, it is possible to do so. I just wanted to link you to the official documentation on this matter: If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the ingress.class annotation, e.g. creating an Ingress with an
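A minimal sketch of the two pieces the excerpt describes: an Ingress pinned to the nginx controller via the ingress.class annotation, and the nginx controller's Service pinned to a pre-reserved regional static IP. All names and the IP are placeholder assumptions, not from the original question.

```yaml
# Hypothetical Ingress that explicitly selects nginx over the GCE controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress                         # assumed name
  annotations:
    kubernetes.io/ingress.class: "nginx"   # without this, GKE's GCE controller may claim it
spec:
  backend:
    serviceName: my-service                # assumed backend Service
    servicePort: 80
---
# The nginx controller is exposed through a TCP LoadBalancer Service,
# which accepts a pre-reserved *regional* static IP via loadBalancerIP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  loadBalancerIP: 35.0.0.1   # placeholder; reserve first with `gcloud compute addresses create`
  selector:
    app: nginx-ingress       # assumed controller pod label
  ports:
  - port: 80
```

The reserved address must be in the same region as the cluster, since a Service of type LoadBalancer provisions a regional network load balancer, not the global HTTP(S) one the GCE Ingress controller uses.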

Not able to access the Kubernetes dashboard in gcloud

三世轮回 submitted on 2019-12-05 04:51:46
Question: I am following the instructions given here. I used the command to get a running cluster; in the gcloud console I typed: curl -sS https://get.k8s.io | bash as described in the link. After that I ran the command kubectl cluster-info, which returned: kubernetes-dashboard is running at https://35.188.109.36/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard but when I go to that URL from Firefox, the message that comes back is: User "system:anonymous" cannot proxy services in the
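The "system:anonymous" error means the browser is hitting the apiserver without credentials. A common workaround, sketched here under the assumption that kubectl is already authenticated against the cluster, is to tunnel through kubectl proxy so every request carries your credentials:

```shell
# Run a local authenticated proxy to the apiserver.
kubectl proxy --port=8001 &

# Then open the dashboard through the proxy instead of the public IP
# (the exact path varies by dashboard/cluster version):
# http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
```

The alternative of granting the anonymous user RBAC permissions works too, but exposes the dashboard to anyone who can reach that IP, so the proxy route is the safer default.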

How to back up a Postgres database in Kubernetes on Google Cloud?

£可爱£侵袭症+ submitted on 2019-12-05 03:54:34
What is the best practice for backing up a Postgres database running on Google Cloud Container Engine? My thinking is to store the backups in Google Cloud Storage, but I am unsure how to connect the Disk/Pod to a Storage Bucket. I am running Postgres in a Kubernetes cluster using the following configuration: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres-deployment spec: replicas: 1 template: metadata: labels: app: postgres spec: containers: - image: postgres:9.6.2-alpine imagePullPolicy: IfNotPresent env: - name: PGDATA value: /var/lib/postgresql
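One common pattern, sketched here as an assumption rather than an official recipe: a CronJob that runs pg_dump against the Postgres Service and streams the output to a GCS bucket with gsutil. The Service name, database, and bucket are placeholders; the node's service account (or a mounted key) needs write access to the bucket.

```yaml
# Hypothetical nightly logical backup of Postgres to Google Cloud Storage.
apiVersion: batch/v1beta1        # batch/v2alpha1 on older clusters
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"          # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: google/cloud-sdk:alpine   # ships gsutil; pg client added below
            command: ["/bin/sh", "-c"]
            args:
            - apk add --no-cache postgresql-client &&
              pg_dump -h postgres-service -U postgres mydb |
              gsutil cp - gs://my-backup-bucket/backup-$(date +%F).sql
```

A logical dump like this avoids having to attach the Postgres PersistentDisk to a second pod; for large databases, disk snapshots via the GCE API are the usual alternative.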

Kubernetes load balancer SSL termination in Google Container Engine?

纵饮孤独 submitted on 2019-12-05 03:34:40
Background: I'm pretty new to Google's Cloud platform, so I want to make sure I'm not missing anything obvious. We're experimenting with GKE and Kubernetes and we'd like to expose some services over HTTPS. I've read the documentation for HTTP(S) load balancing, which seems to suggest that you should maintain your own nginx instance that does SSL termination and load balancing. To me this looks quite complex (I'm used to working on AWS, whose load balancer (ELB) has supported SSL termination for ages). Questions: Is creating and maintaining an nginx instance the way to go if all
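For reference, the GCE Ingress controller built into GKE can terminate SSL itself via the tls stanza, so a self-managed nginx is not strictly required. A minimal sketch with assumed names:

```yaml
# Hypothetical Ingress terminating TLS at Google's HTTP(S) load balancer.
# The secret is created beforehand, e.g.:
#   kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress          # assumed name
spec:
  tls:
  - secretName: my-tls-secret  # certificate + key served by the load balancer
  backend:
    serviceName: web-service   # assumed backend Service
    servicePort: 80
```

This mirrors the ELB model the questioner is used to: the certificate lives at the load balancer, and traffic to the backends is plain HTTP inside the cluster.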

How to list the published container images in the Google Container Registry using gcloud or another CLI

人走茶凉 submitted on 2019-12-05 02:25:37
Is there a gcloud API or other command-line interface (CLI) to access the list of published container images in the private Google Container Registry? (That is, the container registry inside a Google Cloud Platform project.) gcloud container does not seem to help: $ gcloud container Usage: gcloud container [optional flags] <group | command> group may be clusters | operations command may be get-server-config Deploy and manage clusters of machines for running containers. flags: --zone ZONE, -z ZONE The compute zone (e.g. us-central1-a) for the cluster global flags: Run `gcloud -h` for a
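Later gcloud releases than the help text shown above added a dedicated image group. Assuming an up-to-date SDK and a placeholder project ID:

```shell
# List images in the project's private registry:
gcloud container images list --repository=gcr.io/my-project-id

# List the tags of one image:
gcloud container images list-tags gcr.io/my-project-id/hello-node
```

If gcloud is too old to know these subcommands, `gcloud components update` brings them in; the underlying data is also reachable through the Docker Registry v2 HTTP API that gcr.io implements.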

How to use Kubernetes DNS for pods?

孤街浪徒 submitted on 2019-12-05 00:37:07
On GKE, kube-dns is running on my nodes; I can see the Docker containers. I do have access to Services by name, which is great for all the applications where load balancing is a perfectly suitable solution, but how would I use the DNS to access individual pods? I know I can look up specific pods in the API, but then I need to update the hosts file myself and keep watching the pod list. DNS is supposed to do that for me, so how is it meant to be used within a pod? The Kubernetes docs say the DNS info needs to be passed to the kubelet, but I have no access to that on GKE as far as I know, so is it
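The usual answer is a headless Service: setting clusterIP to None makes kube-dns return the pod IPs directly instead of a single service VIP, with no kubelet configuration needed. Names below are placeholder assumptions:

```yaml
# Hypothetical headless Service: DNS A queries for
# my-app-peers.default.svc.cluster.local return every matching pod IP.
apiVersion: v1
kind: Service
metadata:
  name: my-app-peers     # assumed name
spec:
  clusterIP: None        # "headless" - no virtual IP, no load balancing
  selector:
    app: my-app          # assumed pod label
  ports:
  - port: 8080
```

kube-dns keeps the record set in sync as pods come and go, which replaces the hand-maintained hosts file. If stable per-pod names are needed as well, running the pods under a StatefulSet gives each one an individual DNS entry under the headless Service's domain.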

Node pool does not reduce its node size to zero although autoscaling is enabled

浪尽此生 submitted on 2019-12-05 00:15:19
Question: I have created two node pools: a small one for all the Google system jobs and a bigger one for my tasks. The bigger one should reduce its size to 0 after the job is done. The problem is: even when there are no cron jobs, the node pool does not reduce its size to 0. Creating the cluster: gcloud beta container --project "projectXY" clusters create "cluster" --zone "europe-west3-a" --username "admin" --cluster-version "1.9.6-gke.0" --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" -
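Two things usually block scale-to-zero: the pool must be created with a minimum of 0, and the autoscaler will not remove a node that still runs system pods (kube-dns, heapster, etc.), so those must be kept on the small pool. A sketch of the big pool's creation, with names taken from the command above and flags that are assumptions about the intended setup:

```shell
# Hypothetical work pool that is allowed to shrink to zero nodes:
gcloud container node-pools create work-pool \
  --cluster=cluster --zone=europe-west3-a \
  --enable-autoscaling --min-nodes=0 --max-nodes=5 \
  --machine-type=n1-standard-4
```

Tainting the work pool (or using node selectors on the system Deployments) keeps system pods off it; once only completed Job pods remain there, the autoscaler can drain the last node.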

Why is the Prometheus operator not able to start?

一世执手 submitted on 2019-12-04 21:51:52
I'm trying to create Prometheus with the operator in a fresh new k8s cluster. I use the following files. First I create a namespace, monitoring, then apply this file, which works OK: apiVersion: apps/v1beta2 kind: Deployment metadata: labels: k8s-app: prometheus-operator name: prometheus-operator namespace: monitoring spec: replicas: 2 selector: matchLabels: k8s-app: prometheus-operator template: metadata: labels: k8s-app: prometheus-operator spec: priorityClassName: "operator-critical" tolerations: - key: "WorkGroup" operator: "Equal" value: "operator" effect: "NoSchedule" - key: "WorkGroup" operator:
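One thing worth verifying with a spec like the one above: priorityClassName only resolves if a matching PriorityClass object exists in the cluster, and a missing class is a common reason such pods never get scheduled. A sketch of the class this Deployment references, with an assumed priority value:

```yaml
# Hypothetical PriorityClass matching the Deployment's priorityClassName.
apiVersion: scheduling.k8s.io/v1beta1   # v1alpha1 on older clusters
kind: PriorityClass
metadata:
  name: operator-critical
value: 1000000                          # assumed priority value
globalDefault: false
description: "Priority for the prometheus-operator pods"
```

`kubectl describe pod` on the pending operator pods shows whether the failure is the missing class, the WorkGroup taints not matching any node, or something else.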

Exposing two ports in Google Container Engine

眉间皱痕 submitted on 2019-12-04 19:49:12
Question: Is it possible to create a Pod in Google Container Engine where two ports are exposed: port 8080 listening for incoming content and port 80 distributing this content to clients? The following command to create a Pod is given as an example by Google: kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node --port=8080 I can't seem to define a listening port, and when adding a second "--port=" switch only one port is exposed. Is there a way to expose a second port, or am I limited to
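kubectl run only accepts a single --port, but a manifest has no such limit: the containerPort list can name as many ports as needed. A sketch, with the project ID written out as a placeholder:

```yaml
# Hypothetical Pod exposing both the listening and the serving port.
apiVersion: v1
kind: Pod
metadata:
  name: hello-node
spec:
  containers:
  - name: hello-node
    image: gcr.io/my-project/hello-node   # PROJECT_ID placeholder
    ports:
    - name: content-in
      containerPort: 8080   # receives incoming content
    - name: serve
      containerPort: 80     # serves content to clients
```

A Service in front of this Pod can likewise list both ports, mapping each to its own targetPort.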

How to vertically scale a Google Cloud instance without stopping the running app

假装没事ソ submitted on 2019-12-04 19:33:54
I have a Node.js app that provides a service which cannot be interrupted. However, the load on the app varies over time, and to save cost I would like the VM instance machine type to autoscale as a function of the load (i.e. when over 80% CPU utilization, scale up from 1 vCPU (3.75 GB memory, n1-standard-1) to 2 vCPU (7.5 GB memory, n1-standard-2)). Is this possible? PS: I have looked at using the container engine and Kubernetes, but due to how the app operates, it cannot be replicated to multiple pods and continue working. You can only change the machine type of a stopped instance, and an instance is
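The answer's constraint can be confirmed directly: GCE only changes a machine type while the instance is stopped, so a resize always interrupts the service. A sketch with placeholder instance and zone names:

```shell
# Machine type can only change while the instance is stopped:
gcloud compute instances stop my-node --zone=us-central1-a
gcloud compute instances set-machine-type my-node \
  --zone=us-central1-a --machine-type=n1-standard-2
gcloud compute instances start my-node --zone=us-central1-a
```

Because of this, uninterrupted vertical autoscaling is not available; the usual workaround is to over-provision a single larger instance, or to put even a non-replicable app behind a load balancer and cut over to a resized instance during a brief drain window.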