google-kubernetes-engine

Not able to access the Kubernetes dashboard in gcloud

Submitted by 人走茶凉 on 2019-12-03 20:06:52
I am following the instructions given here. To get a running cluster, in the gcloud console I typed:

curl -sS https://get.k8s.io | bash

as described in the link. After that I ran kubectl cluster-info, which reported:

kubernetes-dashboard is running at https://35.188.109.36/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

But when I open that URL in Firefox, the message that comes back is:

User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."

Expected behaviour: it should ask for an admin name and password to …
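The "system:anonymous" error is RBAC refusing an unauthenticated request sent straight to the API server's proxy path. A common workaround, shown here only as a sketch and not taken from the excerpt above, is to tunnel through kubectl proxy so the request carries your local kubeconfig credentials (the dashboard path below is the one reported by kubectl cluster-info in the post):

# Start a local, authenticated proxy to the API server
kubectl proxy --port=8001

# Then open the dashboard through the proxy instead of the public IP
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard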

Autoscaling in Google Container Engine

Submitted by 不想你离开。 on 2019-12-03 19:13:28
Question: I understand that Container Engine is currently in alpha and not yet complete. From the docs I assume there is no auto-scaling of pods (e.g. depending on CPU load) yet, correct? I'd love to be able to configure a replication controller to automatically add pods (and VM instances) when the average CPU load reaches a defined threshold. Is this somewhere on the near-future roadmap? Or is it possible to use the Compute Engine Autoscaler for this? (If so, how?)

Answer 1: As we work towards a Beta …
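The answer is cut off, but pod autoscaling has since landed in Kubernetes as the Horizontal Pod Autoscaler. A minimal sketch of the modern equivalent, assuming a Deployment named web already exists (the name is a placeholder, not from the post):

# Scale between 1 and 10 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment web --cpu-percent=80 --min=1 --max=10

# Inspect the resulting autoscaler
kubectl get hpa

Scaling of the underlying VM instances is handled separately by the GKE cluster autoscaler on the node pool, not by the HPA itself.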

Can a Persistent Volume be resized?

Submitted by 你说的曾经没有我的故事 on 2019-12-03 18:38:29
Question: I'm running a MySQL deployment on Kubernetes, but it seems my allocated space was not enough. I initially added a persistent volume of 50GB and now I'd like to expand it to 100GB. I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim?

Answer 1: Yes, as of 1.11, persistent volumes can be resized on certain cloud providers. To increase volume size: edit the PVC (kubectl edit pvc $your_pvc) to …
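The answer excerpt stops mid-step; the flow it describes is usually to confirm the claim's StorageClass allows expansion and then raise the requested size on the PVC itself. A sketch, where $your_pvc and the StorageClass name standard are placeholders rather than values from the post:

# Check that the StorageClass permits expansion (should print "true")
kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'

# Raise the request on the claim; the provisioner grows the underlying disk
kubectl patch pvc $your_pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

Depending on the volume type and Kubernetes version, the filesystem resize may only complete after the pod using the claim is restarted.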

Allow Privileged Containers in Kubernetes on Google Container (GKE)

Submitted by 时光毁灭记忆、已成空白 on 2019-12-03 17:11:33
Question: I am using a Kubernetes cluster deployed through Google Container Engine (GKE) from the Google Cloud Developer's Console, cluster version 0.19.3. I would like to run a privileged container, like in the Kubernetes NFS Server example:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
  - name: nfs-server
    image: jsafrane/nfs-data
    ports:
    - name: nfs
      containerPort: 2049
    securityContext:
      privileged: true

Since the default Google Container Engine …
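A quick way to see whether a given cluster admits the pod, sketched here on the assumption that the manifest above is saved as nfs-server.yaml, is to submit it and read the events; a cluster that forbids privileged containers rejects the pod at admission time rather than at runtime:

kubectl apply -f nfs-server.yaml
kubectl describe pod nfs-server   # check Events for an admission or validation error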

Setting up a Kubernetes cluster with HTTP load balancing ingress for RStudio and Shiny results in error pages

Submitted by 喜你入骨 on 2019-12-03 16:14:17
I'm attempting to create a cluster on Google Kubernetes Engine that runs nginx, RStudio Server and two Shiny apps, following and adapting this guide. I have 4 workloads that are all green in the UI, deployed via:

kubectl run nginx --image=nginx --port=80
kubectl run rstudio --image gcr.io/gcer-public/persistent-rstudio:latest --port 8787
kubectl run shiny1 --image gcr.io/gcer-public/shiny-googleauthrdemo:latest --port 3838
kubectl run shiny5 --image=flaviobarros/shiny-wordcloud --port=80

They were then all exposed as NodePort services via:

kubectl expose deployment nginx --target-port=80 --type …
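The excerpt stops before the Ingress itself. A sketch of the kind of path-based GKE Ingress such a setup typically uses is below; the service names and ports are taken from the commands above, while the paths and the current networking.k8s.io/v1 API are assumptions, not details from the post:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: r-ingress
spec:
  rules:
  - http:
      paths:
      - path: /rstudio
        pathType: Prefix
        backend:
          service:
            name: rstudio
            port:
              number: 8787
      - path: /shiny1
        pathType: Prefix
        backend:
          service:
            name: shiny1
            port:
              number: 3838
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Note that the GKE HTTP load balancer does not rewrite request paths, so RStudio and Shiny generally have to be configured to serve from those sub-paths (or be routed by host instead), which is a frequent source of the error pages described in the title.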

Node pool does not reduce its node size to zero although autoscaling is enabled

Submitted by 对着背影说爱祢 on 2019-12-03 15:52:21
I have created two node pools: a small one for all the Google system jobs and a bigger one for my tasks. The bigger one should reduce its size to 0 after the job is done. The problem is: even when there are no cron jobs running, the node pool does not reduce its size to 0. Creating the cluster:

gcloud beta container --project "projectXY" clusters create "cluster" \
  --zone "europe-west3-a" --username "admin" --cluster-version "1.9.6-gke.0" \
  --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" \
  --scopes "https://www.googleapis.com/auth/cloud-platform" \
  --num-nodes "1" --network "default" --enable …
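The command is cut off at the autoscaling flags. For a second pool that is meant to scale down to zero, the relevant flags would typically look like the sketch below; the pool name, machine type and maximum are placeholders, not values from the post:

gcloud container node-pools create work-pool \
  --cluster "cluster" --zone "europe-west3-a" \
  --machine-type "n1-standard-4" \
  --enable-autoscaling --min-nodes 0 --max-nodes 5

The cluster autoscaler will only drain a pool to zero if nothing unmovable runs there, so kube-system pods and pods without a controller scheduled onto the big pool are a common reason it stays at one node.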

How to bind a PersistentVolumeClaim with a gcePersistentDisk?

Submitted by 老子叫甜甜 on 2019-12-03 13:46:36
Question: I would like to bind a PersistentVolumeClaim to a gcePersistentDisk PersistentVolume. Below are the steps I followed:

1. Creation of the gcePersistentDisk:

gcloud compute disks create --size=2GB --zone=us-east1-b gce-nfs-disk

2. Definition of the PersistentVolume and the PersistentVolumeClaim:

# pv-pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-nfs-disk
    fsType: ext4
--- …
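The excerpt is cut off right after the --- separator. A claim that would bind to the PV above would typically look like the following sketch; the claim name and the empty storageClassName are assumptions (the empty class keeps GKE's default dynamic provisioner from creating a fresh disk instead of binding to nfs-pv):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: ""       # bind to a pre-created PV, do not provision dynamically
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi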

Not able to perform gcloud init inside a Dockerfile

Submitted by 风格不统一 on 2019-12-03 12:51:11
I have made a Dockerfile for deploying my Node.js application to Google Container Engine. It looks like this:

FROM node:0.12
COPY google-cloud-sdk /google-cloud-sdk
RUN /google-cloud-sdk/bin/gcloud init
COPY bpe /bpe
CMD cd /bpe;npm start

I use gcloud init inside the Dockerfile because my Node.js application uses the gcloud-node module to create buckets in GCS. When I build the above Dockerfile with docker build, it fails with the following errors:

sudo docker build -t gcr.io/[PROJECT_ID]/test-node:v1 .
Sending build context to Docker daemon 489.3 MB
Sending build context to …
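gcloud init is an interactive command, which is one reason it cannot complete during a non-interactive docker build. A non-interactive alternative, sketched here rather than taken from the missing answer, is to authenticate with a service-account key; key.json and my-project-id are placeholders you would have to supply:

FROM node:0.12
COPY google-cloud-sdk /google-cloud-sdk
COPY key.json /key.json
# Non-interactive authentication with a service-account key
RUN /google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=/key.json \
 && /google-cloud-sdk/bin/gcloud config set project my-project-id
COPY bpe /bpe
CMD cd /bpe && npm start

Baking a key into the image is a security trade-off; on GKE it is usually cleaner to rely on the node's metadata credentials or mount the key at runtime instead.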

Exposing two ports in Google Container Engine

Submitted by 我们两清 on 2019-12-03 11:54:49
Is it possible to create a Pod in Google Container Engine where two ports are exposed: port 8080 listening for incoming content and port 80 distributing this content to clients? The following command to create a Pod is given as an example by Google:

kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node --port=8080

I can't seem to define a listening port, and when adding a second --port= switch only one port is exposed. Is there a way to expose a second port, or am I limited to one port per container?

caesarxuchao: No, you cannot specify multiple ports in kubectl run. But you can …
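The answer is cut off; the usual way to get both ports is to write the Pod (or Deployment) spec directly, since containerPort entries are a list. A sketch reusing the image from the question (${PROJECT_ID} is the same placeholder as in the kubectl run command and must be substituted before applying):

apiVersion: v1
kind: Pod
metadata:
  name: hello-node
spec:
  containers:
  - name: hello-node
    image: gcr.io/${PROJECT_ID}/hello-node
    ports:
    - containerPort: 8080   # receives incoming content
    - containerPort: 80     # serves content to clients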

Allowing a non-root user access to a PersistentVolumeClaim

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-03 11:21:11
In Kubernetes I can use a PersistentVolumeClaim to create some storage, which I can later mount in a container. However, if the user in the container is not root, that user will not be able to access that directory because it is owned by root. What is the right way to access such a volume? (I did not find any user/permission options either when creating or when mounting the volume.)

First, find out the UID number your process is running as. Then you can tell Kubernetes to chown (sort of) the mount point of the volume for your pod by adding .spec.securityContext.fsGroup:

spec:
  ...
  securityContext …
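The answer excerpt stops at the securityContext key. A sketch of what the complete pod-level setting usually looks like; the GID 2000, the image and the claim name data are placeholders, not values from the post:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    fsGroup: 2000            # mounted volumes are made group-accessible for this GID
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data

With fsGroup set, the kubelet adds the supplemental group to the container processes and adjusts group ownership of the volume, so a non-root user in that group can read and write the mount.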