kubernetes-helm

How to expose sentry on subpath in nginx ingress?

倖福魔咒の submitted on 2021-01-29 09:47:26
Question: I have Sentry running in my cluster and I want to expose it on a subpath using the nginx ingress, but it seems to work only on the root path; I tried several approaches and none worked. Is there any configuration I can apply to make it work on a subpath? I have seen some examples setting these two variables in the sentry.conf.py file:

    SENTRY_URL_PREFIX = '/sentry'
    FORCE_SCRIPT_NAME = '/sentry'

but I don't know whether that works. Here is the ingress resource for Sentry: apiVersion: networking
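A minimal sketch of the combination the question hints at, assuming the nginx ingress controller and a Service named sentry on port 9000 (both names are illustrative, not taken from the question):

    # sentry.conf.py — tell Sentry it is served under a URL prefix
    SENTRY_URL_PREFIX = '/sentry'
    FORCE_SCRIPT_NAME = '/sentry'

    # Ingress routing /sentry to the Sentry service (host and names are placeholders)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: sentry
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - host: sentry.example.com
          http:
            paths:
              - path: /sentry
                pathType: Prefix
                backend:
                  service:
                    name: sentry
                    port:
                      number: 9000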

Helm: variables in values.yaml

余生颓废 submitted on 2021-01-29 05:14:37
Question: I need to use variables inside the values.yaml file:

    app:
      version: 1.0
    my_app1:
      tag: {{ .app.version }}   <- version taken from app.version; in my case tag == version

Any help will be really appreciated.

Answer 1:

    app:
      version: 1.0
    my_app1:
      tag: {{ .Values.app.version }}

    {{ .Values.app | first | default .Values.app.version }}

You can also try this. EDIT 2:

    {{- range $key, $value := .Values.app }}
    {{ $key }}: {{ $value }}
    {{- end }}

Source: https://stackoverflow.com/questions/56685449/helm-variables
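Note that Helm does not render templates inside values.yaml itself; a {{ .Values.app.version }} reference has to live in a template file, or be passed through the tpl function. A minimal sketch, with the image and value names as assumptions:

    # values.yaml
    app:
      version: "1.0"

    # templates/deployment.yaml (excerpt) — reference the value from a template
    image: "myrepo/my_app1:{{ .Values.app.version }}"

    # or, to allow a template string inside values.yaml, render it with tpl:
    #   values.yaml:   my_app1: { tag: "{{ .Values.app.version }}" }
    #   template:      tag: {{ tpl .Values.my_app1.tag . }}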

Why does a Kubernetes pod report `Insufficient memory` even if there is free memory on the host?

亡梦爱人 submitted on 2021-01-28 20:01:07
Question: I am running minikube v1.15.1 on macOS and installed helm v3.4.1. I run

    helm install elasticsearch elastic/elasticsearch --set resources.requests.memory=2Gi --set resources.limits.memory=4Gi --set replicas=1

to install Elasticsearch on the k8s cluster. The pod elasticsearch-master-0 is deployed but stays in Pending status. When I run kubectl describe pod elasticsearch-master-0 it gives me the warning below:

    Warning FailedScheduling 61s (x2 over 2m30s) default-scheduler 0/1 nodes are available: 1
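FailedScheduling with "Insufficient memory" means the scheduler cannot find a node whose allocatable memory covers the pod's request, regardless of how much memory the Mac itself has free: minikube's single node only has whatever memory the VM was started with. Two ways to address it, assuming a small default-sized minikube VM (sizes below are illustrative):

    # give the minikube node more memory (requires recreating the cluster)
    minikube delete
    minikube start --memory=8192 --cpus=4

    # or shrink the request so it fits on the existing node
    helm install elasticsearch elastic/elasticsearch \
      --set resources.requests.memory=1Gi \
      --set resources.limits.memory=2Gi \
      --set replicas=1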

Kubernetes customer subdomain dynamic binding

一笑奈何 submitted on 2021-01-28 14:54:01
Question: I have the following use case: our customers frequently release new services on their K8s clusters. These new services are reachable from the outside world through a load balancer, and they use Ingress to dynamically configure this load balancing once a service is deployed. This makes it really easy for our customers' development teams because they don't have to wait until somebody configures the load balancing manually. They can just create their own Ingress resource next to their service
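A minimal sketch of the pattern described: each team ships an Ingress next to its Service and the ingress controller picks it up automatically. The host, service name and port below are assumptions for illustration:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-service
    spec:
      rules:
        - host: my-service.customer1.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service
                    port:
                      number: 80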

How to configure custom LDAP in Grafana helm chart?

巧了我就是萌 submitted on 2021-01-28 10:48:02
Question: I'm a newbie at Kubernetes and Helm, trying to customise the stable/grafana Helm chart (https://github.com/helm/charts/tree/master/stable/grafana) with my own LDAP. What's the difference between the auth.ldap part of grafana.ini and the ldap section of the chart's values.yaml file? How can I configure the LDAP host address and credentials?

Answer 1: To enable LDAP configuration in Grafana you need to update both parts. In values.yaml there are two sections, grafana.ini and ldap. To enable LDAP you need to update
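A rough sketch of what the two sections can look like together; the host, bind DN and password are placeholders, and the key layout follows the chart's values.yaml as a sketch rather than a verified configuration:

    grafana.ini:
      auth.ldap:
        enabled: true
        allow_sign_up: true
        config_file: /etc/grafana/ldap.toml

    ldap:
      enabled: true
      # the chart mounts this TOML as the ldap.toml referenced above
      config: |-
        [[servers]]
        host = "ldap.example.com"
        port = 389
        bind_dn = "cn=admin,dc=example,dc=com"
        bind_password = "changeme"
        search_filter = "(uid=%s)"
        search_base_dns = ["dc=example,dc=com"]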

How to enable persistence in the prometheus-operator Helm chart

筅森魡賤 submitted on 2021-01-27 05:36:07
Question: I am using the prometheus-operator Helm chart. I want the data in the Prometheus server to persist, but upon restart of the Prometheus StatefulSet the data disappears. When inspecting the YAML definitions of the associated StatefulSet and Pod objects, there is no PersistentVolumeClaim. I tried the following change to values.yaml, per the docs in https://github.com/helm/charts/tree/master/stable/prometheus:

    prometheus:
      server:
        persistentVolume:
          enabled: true

but this has no effect on the end
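One likely reason the change has no effect is that the prometheus-operator chart uses a different values layout than stable/prometheus: persistence is configured through a storageSpec under prometheusSpec. A sketch, with the storage class and size as assumptions:

    prometheus:
      prometheusSpec:
        storageSpec:
          volumeClaimTemplate:
            spec:
              storageClassName: standard
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 50Gi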

helm init fails: "is not a valid chart repository or cannot be reached: Failed to fetch ... 403 Forbidden"

前提是你 submitted on 2021-01-26 19:41:58
Question:

    is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden

helm init started failing today; we are using HELM_VERSION: v2.13.0 in our CI/CD.

    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage
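The old kubernetes-charts.storage.googleapis.com repository was deprecated in favour of charts.helm.sh/stable and no longer serves the index, which is a likely cause of the 403. If that is the case here, pointing Helm v2 at the replacement stable repo is one way out:

    # fresh init against the new stable repo
    helm init --client-only --stable-repo-url https://charts.helm.sh/stable

    # or fix an already-initialised client
    helm repo remove stable
    helm repo add stable https://charts.helm.sh/stable
    helm repo update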

How to add a Helm repo from an existing GitHub project?

时光总嘲笑我的痴心妄想 submitted on 2021-01-24 07:45:47
Question: I have an existing GitHub project. I want to add a helm folder to the project to store the Helm YAML files, and reference this GitHub project/folder so it acts like a Helm repo in my local/dev environment. I know I can add the charts to my local/default Helm repo. The use case: if another developer checks out the code from GitHub and needs to work on the charts, he can run helm install directly from the working folder. The helm.sh website has instructions for adding a gh-pages
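Two sketches of what this can look like; the repository, branch and chart paths are placeholders:

    # install straight from a checked-out working folder — no repo needed
    git clone https://github.com/your-org/your-project.git
    cd your-project
    helm install my-release ./helm/mychart

    # or publish the folder as a chart repo served from raw GitHub URLs
    helm package helm/mychart -d docs/
    helm repo index docs/ --url https://raw.githubusercontent.com/your-org/your-project/main/docs
    # commit and push docs/, then:
    helm repo add myrepo https://raw.githubusercontent.com/your-org/your-project/main/docs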

Add custom scrape endpoints in helm chart kube-prometheus-stack deployment

不羁岁月 submitted on 2021-01-23 06:50:31
Question: First off, I'm a little new to using Helm... I'm struggling to get the Helm deployment of https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack to work the way I would like in my Kubernetes cluster. I like what it has done so far, but how can I make it scrape a custom endpoint? I have seen https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus, under the section titled "Scraping Pod Metrics via Annotations". I have added
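With kube-prometheus-stack the usual way to add a custom scrape target is a ServiceMonitor rather than the annotation approach documented for the plain prometheus chart. A sketch, where the app name, port name and release label are assumptions:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: my-app
      labels:
        release: kube-prometheus-stack   # with default selectors this must match the Helm release name
    spec:
      selector:
        matchLabels:
          app: my-app                    # labels on the Service exposing the metrics
      endpoints:
        - port: metrics                  # named port on that Service
          path: /metrics
          interval: 30s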