kubernetes-helm

Deploying Helm workloads with Terraform on a GKE cluster

Posted by 假装没事ソ on 2021-02-19 06:19:06
Question: I am trying to use the Terraform Helm provider (https://www.terraform.io/docs/providers/helm/index.html) to deploy a workload to a GKE cluster. I am more or less following Google's example - https://github.com/GoogleCloudPlatform/terraform-google-examples/blob/master/example-gke-k8s-helm/helm.tf - but I do want to use RBAC by creating the service account manually. My helm.tf looks like this:

variable "helm_version" {
  default = "v2.13.1"
}

data "google_client_config" "current" {}

provider "helm" {
  …
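A minimal sketch of how a manually created service account can be wired into the Tiller-based (Helm v2) setup the question targets. It assumes Terraform 0.12+ syntax, a google_container_cluster data source named "cluster", and the service_account/install_tiller arguments that the Terraform Helm provider offered in its Helm 2-era releases; resource names are illustrative, not from the question:

# Dedicated service account for Tiller plus a (broad) cluster-admin binding
resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.tiller.metadata[0].name
    namespace = "kube-system"
  }
}

# Point the Helm provider at the GKE cluster and the service account above
provider "helm" {
  service_account = kubernetes_service_account.tiller.metadata[0].name
  install_tiller  = true

  kubernetes {
    host                   = data.google_container_cluster.cluster.endpoint
    token                  = data.google_client_config.current.access_token
    cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
  }
}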

Helm + Kubernetes, load and enable extensions or modules in PHP

Posted by 六月ゝ 毕业季﹏ on 2021-02-19 05:59:05
Question: I have a problem when I run a PHP deployment with Kubernetes, because it doesn't load the modules or extension libraries. My deployment file is this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    app: php
spec:
  selector:
    matchLabels:
      app: php
  replicas: 1
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
        - name: php
          image: php:7-fpm
          env:
            - name: PHP_INI_SCAN_DIR
              value: :/usr/local/etc/php/conf.custom
          ports:
            - containerPort: 9000
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", …
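The stock php:7-fpm image only ships with a handful of extensions compiled in, so pointing PHP_INI_SCAN_DIR at extra .ini files is not enough when the corresponding .so files were never built. A common approach - shown here only as a sketch, with example extension names that are not from the question - is to bake the extensions into a custom image using the helper scripts the official PHP images provide, and point the Deployment's image field at that custom image instead of php:7-fpm:

# Hypothetical Dockerfile for a custom PHP-FPM image with extensions baked in
FROM php:7-fpm
# Compile and enable bundled extensions in one step (names are examples)
RUN docker-php-ext-install pdo_mysql mysqli opcache
# PECL extensions need an explicit enable step after installation
RUN pecl install redis && docker-php-ext-enable redis

The extra .ini files scanned via PHP_INI_SCAN_DIR can then be mounted from a ConfigMap, while the extension binaries themselves are already present when PHP-FPM starts.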

Why does Tiller connect to localhost:8080 for the Kubernetes API?

Posted by 我只是一个虾纸丫 on 2021-02-19 01:20:11
Question: When using Helm for Kubernetes package management, after installing the Helm client and running helm init, I can see the Tiller pods running on the Kubernetes cluster. But then when I run helm ls, it gives an error: Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused. Using kubectl logs I can see a similar message: [storage/driver] 2017/08/28 08:08:48 list: failed to list: Get http://localhost…
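An error like this generally means Tiller cannot build a usable in-cluster API client configuration and is falling back to the client-go default of localhost:8080. One commonly suggested remediation from the Helm 2 days - sketched below, with the binding name chosen as an example - was to give Tiller its own service account with sufficient RBAC permissions and re-initialise it so the pod mounts that account's token:

# Service account and cluster-admin binding for Tiller
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Re-deploy Tiller so it runs under that service account
helm init --service-account tiller --upgrade

# If the error persists, Tiller's own logs usually show why the
# in-cluster config could not be built
kubectl -n kube-system logs deploy/tiller-deploy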

Helm install, Kubernetes - how to wait for the pods to be ready?

Posted by 烈酒焚心 on 2021-02-16 06:33:05
Question: I am creating a CI/CD pipeline. I run helm install --wait --timeout 300 ..., but that doesn't really wait; it just returns when the "release" status is DEPLOYED. So then I see a few things in kubectl get pods --namespace default -l 'release=${TAG}' -o yaml that could be used:

- kind: Pod
  status:
    conditions:
      - lastProbeTime: null
        lastTransitionTime: 2018-05-11T00:30:46Z
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: 2018-05-11T00:30:48Z
        status: "True"
        type: Ready

So …
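One way to turn those conditions into an explicit gate in the pipeline - a sketch, assuming the pods really do carry the release=${TAG} label shown above and that the kubectl in the pipeline is recent enough to have the wait subcommand - is to let kubectl block until every matching pod reports Ready:

# Block until all pods selected by the release label are Ready,
# or fail after five minutes (selector and timeout are illustrative)
kubectl wait pod \
  --namespace default \
  --selector "release=${TAG}" \
  --for=condition=Ready \
  --timeout=300s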

How to get Pod CPU and Memory Usage from metrics-server?

Posted by 大兔子大兔子 on 2021-02-11 14:51:49
Question: I currently have metrics-server installed and running in my K8s cluster. Utilizing the Kubernetes Python lib, I am able to make this request to get pod metrics:

from kubernetes import client

api_client = client.ApiClient()
ret_metrics = api_client.call_api(
    '/apis/metrics.k8s.io/v1beta1/namespaces/' + 'default' + '/pods',
    'GET', auth_settings=['BearerToken'], response_type='json',
    _preload_content=False)
response = ret_metrics[0].data.decode('utf-8')
print('RESP', json.loads(response))

In …
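Building on that snippet, here is a sketch of pulling per-container CPU and memory out of the response. It assumes the PodMetricsList shape that metrics.k8s.io/v1beta1 returns (items → containers → usage) and handles kube config loading explicitly:

import json

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
api_client = client.ApiClient()

ret_metrics = api_client.call_api(
    '/apis/metrics.k8s.io/v1beta1/namespaces/default/pods', 'GET',
    auth_settings=['BearerToken'], response_type='json',
    _preload_content=False)
pod_list = json.loads(ret_metrics[0].data.decode('utf-8'))

# usage values are Kubernetes quantity strings, e.g. "12m" CPU, "34204Ki" memory
for pod in pod_list['items']:
    for container in pod['containers']:
        usage = container['usage']
        print(pod['metadata']['name'], container['name'],
              'cpu=' + usage['cpu'], 'memory=' + usage['memory'])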

How can I iteratively create pods from a list using Helm?

Posted by 半城伤御伤魂 on 2021-02-10 18:02:34
Question: I'm trying to create a number of pods from a YAML loop in Helm. If I run with --debug --dry-run the output matches my expectations, but when I actually deploy to a cluster, only the last iteration of the loop is present. Some YAML for you:

{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $ …
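The usual cause of "only the last pod survives" with a loop like this is that every iteration is emitted into the same YAML document, so later objects overwrite earlier ones when the manifest is applied. A sketch of the common fix, assuming componentTests is a list of plain names, is to start each iteration with a --- document separator:

{{- if .Values.componentTests }}
{{- range .Values.componentTests }}
---
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    release: {{ $.Release.Name }}
# ... rest of the pod metadata and spec as in the question ...
{{- end }}
{{- end }}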

Automating wildcard subdomain support for Kubernetes using Helm operator

Posted by ℡╲_俬逩灬. on 2021-02-10 06:50:36
Question: Here is my use case: we have a customer where each of their services has to be available on a dedicated subdomain. The naming convention should be service-name.customerdomain.com, where service-name is the deployed service and customerdomain.com is the customer domain. When a new service is created, it should be available automatically, i.e. once the service-name service is deployed into the cluster, it has to be available on service-name.customerdomain.com. I know this can be achieved manually by…
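One way to get the per-service hostname without manual steps - purely a sketch, assuming a wildcard DNS record *.customerdomain.com already points at the cluster's ingress controller and that each service's chart defines the usual fullname helper (the "myservice.fullname" name below is illustrative) - is to template an Ingress whose host is derived from the service name:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "myservice.fullname" . }}
spec:
  rules:
    # The host is derived from the service name, so every service deployed
    # with this template automatically gets its own subdomain.
    - host: {{ include "myservice.fullname" . }}.customerdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ include "myservice.fullname" . }}
                port:
                  number: 80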

Alternative for .Release.Time in Helm v3

Posted by 依然范特西╮ on 2021-02-08 15:16:33
Question: Since Helm v3 the built-in object .Release.Time has been removed. What is the preferred way of injecting the release time into a template now?

Answer 1: It looks like one of the Sprig date functions is the way to go now. For example:

metadata:
  annotations:
    timestamp: {{ now | quote }}

Source: https://stackoverflow.com/questions/61140638/alternative-for-release-time-in-helm-v3
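If a specific string format is wanted rather than the default rendering of now, Sprig's date function takes a Go reference layout; a small sketch:

metadata:
  annotations:
    # Render time formatted with Go's reference layout (RFC 3339 style)
    timestamp: {{ now | date "2006-01-02T15:04:05Z07:00" | quote }}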

Append release timestamp to helm template name

Posted by 心不动则不痛 on 2021-02-08 12:33:47
Question: I'm struggling to find a way to include the Release.Time built-in as part of a Helm name. If I just include it as:

name: {{ template "myapp.name" . }}-{{ .Release.Time }}

a dry run shows this:

name: myapp-seconds:1534946206 nanos:143228281

It seems like this is a *timestamp.Timestamp object or something, because {{ .Release.Time | trimPrefix "seconds:" | trunc 10 }} outputs "wrong type for value; expected string; got *timestamp.Timestamp". I can hack the string parsing by doing: {{ .Release…
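Two sketches that avoid string-parsing the object entirely - the first assumes Helm 2, where the Seconds field of the *timestamp.Timestamp can be read directly; the second assumes Helm 3, where .Release.Time is gone and Sprig's unixEpoch derives the same digits from now:

# Helm 2: read the Seconds field of the timestamp object directly
name: {{ template "myapp.name" . }}-{{ .Release.Time.Seconds }}

# Helm 3: derive the epoch seconds from the render time instead
name: {{ template "myapp.name" . }}-{{ now | unixEpoch }}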

Kubernetes Helm: not a valid chart repository

Posted by 别说谁变了你拦得住时间么 on 2021-02-07 19:56:29
Question: According to the cert-manager installation docs, the jetstack repository should be added:

$ helm repo add jetstack https://charts.jetstack.io

It gives this error message:

Error: looks like "https://charts.jetstack.io" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field "serverInfo"

What are the ways to fix the issue?

Answer 1: This looks to be caused by a patch done in version 3.3.2 of Helm for security-based issues. Reference issue: https:/ …
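A quick way to confirm and work around this - a sketch, on the assumption (per the answer above) that the stricter index parsing introduced in Helm 3.3.2 is what rejects the serverInfo field - is to check the client version and then move to a release on either side of that patch before re-adding the repository:

# Confirm which Helm client produced the error
helm version --short

# After switching to a client that is not affected by the 3.3.2 strict
# parsing (a later patch release, or temporarily an earlier one),
# re-add and refresh the repository
helm repo add jetstack https://charts.jetstack.io
helm repo update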