google-kubernetes-engine

Kubernetes Deployment: how to change container environment variables for rolling updates?

天涯浪子 submitted on 2019-12-04 19:23:37
Below is how I am using Kubernetes on Google. I have one Node application, let's say book-portal; the Node app uses environment variables for configuration.
Step 1: I created a Dockerfile and pushed gcr.io/<project-id>/book-portal:v1
Step 2: deployed with the following command: kubectl run book-portal --image=gcr.io/<project-id>/book-portal:v1 --port=5555 --env ENV_VAR_KEY1=value1 --env ENV_VAR_KEY2=value2 --env ENV_VAR_KEY3=value3
Step 3: kubectl expose deployment book-portal --type="LoadBalancer"
Step 4: got the public IP with kubectl get services book-portal
Now assume I added new features and new
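The change the question leads up to can be made in place; a minimal sketch of how env vars are updated on an existing Deployment (the deployment name and variable names come from the question, the new values are assumptions):

```shell
# Change an environment variable on the running Deployment;
# this triggers a rolling update of the pods automatically.
kubectl set env deployment/book-portal ENV_VAR_KEY1=newvalue1

# New image versions roll out the same way:
kubectl set image deployment/book-portal book-portal=gcr.io/<project-id>/book-portal:v2

# Watch the rollout progress:
kubectl rollout status deployment/book-portal
```

These commands require a live cluster; they only illustrate the mechanism.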

import mysql data to kubernetes pod

╄→гoц情女王★ submitted on 2019-12-04 19:08:48
Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either directly, the same way as with Docker containers:
docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
or by using the data stored in an existing Docker container volume and passing it to the pod?
In case other people are searching for this:
kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
To answer your specific question: you can kubectl exec into your container in order to run commands inside it. You may need to first ensure
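A sketch of the two approaches the answer mentions, with pod name, namespace, and credentials as placeholders (requires a live cluster):

```shell
# Stream a local dump straight into the mysql client inside the pod:
kubectl -n my-namespace exec -i mysql-pod -- \
  mysql -u root -psecret mydatabase < dump.sql

# Alternative: port-forward the pod and use a local mysql client:
kubectl -n my-namespace port-forward pod/mysql-pod 3306:3306 &
mysql -h 127.0.0.1 -P 3306 -u root -psecret mydatabase < dump.sql
```

The first form is the direct analogue of the docker exec command in the question; the second avoids needing the mysql client inside the pod's image to read from stdin over exec.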

How to access the Kubernetes API in Go and run kubectl commands

不问归期 submitted on 2019-12-04 19:07:56
I want to access my Kubernetes cluster's API in Go to get the namespaces available in my k8s cluster, which is running on Google Cloud. My sole purpose is to get the namespaces available in my cluster by running a kubectl command; kindly let me know if there is an alternative. You can start with kubernetes/client-go, the Go client for Kubernetes, made for talking to a Kubernetes cluster (not through kubectl, though: directly through the Kubernetes API). It includes a NamespaceLister, which helps list Namespaces. See "Building stuff with the Kubernetes API — Using Go" from
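A minimal sketch of listing namespaces with client-go, assuming a kubeconfig at the default location and a reachable cluster (the answer names the library but not this exact code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build client config from the local kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Equivalent of `kubectl get namespaces`.
	nsList, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range nsList.Items {
		fmt.Println(ns.Name)
	}
}
```

Inside a pod, clientcmd can be swapped for rest.InClusterConfig(), which picks up the pod's service account.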

Distributed Programming on Google Cloud Engine using Python (mpi4py)

大憨熊 submitted on 2019-12-04 19:02:38
I want to do distributed programming with Python using the mpi4py package. For testing, I set up a 5-node cluster via Google Container Engine and changed my code accordingly. But now, what are my next steps? How do I get my code running and working on all 5 VMs? I tried to just SSH into one VM from my cluster and run the code, but it was obvious that the code was not getting distributed and instead stayed on the same machine :( [see example below]. Code:
from mpi4py import MPI
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()
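mpi4py itself does not distribute anything; the MPI launcher does. A sketch of launching the script across nodes with a hostfile (Open MPI syntax; hostnames and slot counts are assumptions, and MPI plus passwordless SSH must already be set up on every node):

```shell
# hostfile: one line per VM, with how many processes to start there
cat > hostfile <<EOF
node1 slots=1
node2 slots=1
node3 slots=1
node4 slots=1
node5 slots=1
EOF

# Launch 5 ranks, one per VM:
mpirun -np 5 --hostfile hostfile python my_mpi_script.py
```

Running the script directly with python starts a single rank on the local machine, which matches the symptom described.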

How to get ssl on a kubernetes application?

狂风中的少年 submitted on 2019-12-04 18:14:35
Question: I have a simple Meteor app deployed on Kubernetes. I associated an external IP address with the server, so that it's accessible from within the cluster. Now I want to expose it to the internet and secure it (using the HTTPS protocol). Can anyone give simple instructions for this?
Answer 1: In my opinion kube-lego is the best solution for GKE. See why:
- Uses Let's Encrypt as a CA
- Fully automated enrollment and renewals
- Minimal configuration in a single ConfigMap object
- Works with nginx
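Whatever the tooling, the end state kube-lego drives toward is an Ingress with a TLS section; a sketch (hostname, service and secret names are placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: meteor-app
  annotations:
    kubernetes.io/ingress.class: nginx
    # kube-lego watches Ingresses with this annotation and provisions
    # a Let's Encrypt certificate into the named secret
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: meteor-app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: meteor-app
          servicePort: 80
```

Note that kube-lego has since been superseded by cert-manager, which uses the same Ingress-plus-Secret pattern.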

Setting cache-control headers for nginx ingress controller on kubernetes GKE

纵然是瞬间 submitted on 2019-12-04 17:57:59
I have an ingress-nginx controller handling traffic to my Kubernetes cluster hosted on GKE. I set it up using the Helm installation instructions from the docs. For the most part everything is working, but if I try to set cache-related parameters via a server-snippet annotation, all of the served content that should get the cache-control headers comes back as a 404. Here's my ingress-service.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "4000"
    nginx
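For reference, a server-snippet that sets cache headers looks roughly like this (the location pattern and header values are assumptions). A location block defined in a server-snippet replaces nothing in the generated config but also proxies nothing, so requests it matches never reach the backend — a common cause of exactly the 404 described:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    # Injected verbatim into the generated server {} block
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* \.(js|css|png|jpg|svg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
      }
```

Using the configuration-snippet annotation instead injects directives into the existing, generated location block, keeping the proxy_pass to the backend intact.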

How to set a minimum scale for Cloud Run on GKE services?

落爺英雄遲暮 submitted on 2019-12-04 17:32:17
I'm using Cloud Run on Google Kubernetes Engine and I'm able to deploy and access services without a problem. But since I'm running on GKE and paying for the cluster 24/7, it makes no sense to scale a deployment to zero and always have a cold start for the first request. I've found that it's possible to set minScale for the Knative autoscaler to disable scale-to-zero (here, here and here), but I have no idea where to put it. There are a lot of configurations, services and workloads inside GKE for Istio and Knative Serving, but I couldn't find anything matching. Which file or configuration do
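The minScale setting goes on the individual Knative Service, not on a cluster-wide config: it is an annotation on the revision template. A sketch (service name is a placeholder):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        # Keep at least one instance warm instead of scaling to zero
        autoscaling.knative.dev/minScale: "1"
```

Applying this with kubectl apply creates a new revision that the autoscaler will not scale below one instance.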

Allowing access to a PersistentVolumeClaim to non-root user

房东的猫 submitted on 2019-12-04 17:17:31
Question: In Kubernetes I can use a PersistentVolumeClaim to create some storage, which I can later mount in some container. However, if the user in the container is not root, that user will not be able to access that directory because it is owned by root. What is the right way to access such a volume? (I did not find any user/permission options when creating or mounting that volume.)
Answer 1: First, find out the UID number your process is running as. Then you can tell Kubernetes to chown (sort of)
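The "chown (sort of)" the answer refers to is the pod-level fsGroup security context; a sketch assuming the process runs as UID/GID 1000 (image, claim and mount names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    # Supported volumes are made group-owned and writable by this GID
    fsGroup: 1000
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
```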

Set value in dependency of Helm chart

陌路散爱 submitted on 2019-12-04 14:18:29
I want to use the postgresql chart as a requirement for my Helm chart. My requirements.yaml file hence looks like this:
dependencies:
  - name: "postgresql"
    version: "3.10.0"
    repository: "@stable"
In the PostgreSQL Helm chart I now want to set the username with the property postgresqlUsername (see https://github.com/helm/charts/tree/master/stable/postgresql for all properties). Where do I have to specify this property in my project so that it gets propagated to the postgresql dependency? This topic is very clearly described here: https://helm.sh/docs/developing_charts/#using-the-child-parent
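The child-parent mechanism the linked docs describe comes down to namespacing the value under the dependency's name in the parent chart's values.yaml; a sketch (the username value is an assumption):

```yaml
# values.yaml of the parent chart
postgresql:
  postgresqlUsername: myuser
```

Everything under the postgresql: key is passed to the postgresql subchart as its own top-level values.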

How can you reuse dynamically provisioned PersistentVolumes with Helm on GKE?

寵の児 submitted on 2019-12-04 13:42:33
Question: I am trying to deploy a Helm chart which uses a PersistentVolumeClaim and a StorageClass to dynamically provision the required storage. This works as expected, but I can't find any configuration which allows a workflow like
helm delete xxx
# Make some changes and repackage chart
helm install --replace xxx
I don't want to run the release constantly, and I want to reuse the storage in future deployments. Setting the storage class to reclaimPolicy: Retain keeps the disks, but helm will delete
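One way to rebind a retained disk on a later install is to create the PersistentVolume statically, pointing at the existing GCE persistent disk, so the chart's claim binds to it instead of provisioning a new one; a sketch (disk, namespace and claim names are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reused-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # Pre-bind this volume to the chart's claim
  claimRef:
    namespace: default
    name: data-myrelease-0
  gcePersistentDisk:
    pdName: my-existing-disk
    fsType: ext4
```

The claimRef must match the PVC name the chart will create, which can be read off the previous release with kubectl get pvc before deleting it.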