kubernetes-helm

Kubernetes pod unable to connect to RabbitMQ instance running locally

Submitted by 青春壹個敷衍的年華 on 2019-12-29 09:22:22
Question: I am moving my application from Docker to Kubernetes / Helm, and so far I have been successful except for setting up incoming / outgoing connections. One particular issue I am facing is that I am unable to connect to the RabbitMQ instance running locally on my machine in another Docker container. app-deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jks
  labels:
    app: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name:
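A common approach when a pod needs to reach something running outside the cluster (such as a RabbitMQ container on the host machine) is a selector-less Service backed by a manual Endpoints object that points at the host's IP. A minimal sketch, assuming the host is reachable from the cluster at 192.168.1.10 and RabbitMQ listens on 5672 (both values are placeholders to adapt):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
spec:
  ports:
  - port: 5672
    targetPort: 5672
---
apiVersion: v1
kind: Endpoints
metadata:
  name: rabbitmq-external        # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.10             # placeholder: host IP reachable from the cluster
  ports:
  - port: 5672

The application inside the cluster can then use rabbitmq-external:5672 as its broker address.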

Helm install in a certain order

Submitted by 混江龙づ霸主 on 2019-12-29 04:16:08
Question: I am trying to create a Helm chart with the following resources: Secret, ConfigMap, Service, Job, Deployment. This is also the order in which I would like them to be deployed. I have put a post-install hook on the Deployment, but then Helm does not see it as a release resource and I have to manage it manually. The Job needs the information in the Secret and ConfigMap, otherwise I would make it a pre-install hook. But I can't make everything a hook, or nothing will be managed in my release.
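For reference, Helm already installs non-hook resources in a fixed order by kind (Secrets and ConfigMaps come early), and hook ordering is controlled with annotations and weights. A minimal sketch of a pre-install Job hook with an explicit weight (the resource names and image are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: setup-job                              # illustrative name
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "5"                 # lower weights run first
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: setup
        image: busybox                         # placeholder image
        command: ["sh", "-c", "echo setup"]

Hook resources are not tracked as part of the release, which matches the behaviour described above; keeping the Deployment as a regular resource and moving any ordering logic into hook weights is the usual workaround.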

Even after adding an additional Kubernetes node, the new node stays unused and I still get the error "No nodes are available that match all of the predicates"

Submitted by 不羁岁月 on 2019-12-25 02:26:15
Question: We tried to add one more deployment with 2 pods to an existing mix of pods scheduled across a cluster of 4 worker nodes and 1 master node. We are getting the following error: No nodes are available that match all of the predicates: Insufficient cpu (4), Insufficient memory (1), PodToleratesNodeTaints (2). Looking at other threads and the documentation, this would be the case when the existing nodes are exceeding CPU capacity (on 4 nodes) and memory capacity (on 1 node)... To solve the resource issue, we added another
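When the error persists after adding a node, it is worth comparing the new node's allocatable resources with the pod's requests, and checking whether the node carries a taint the pod does not tolerate (the PodToleratesNodeTaints count in the message hints at that). A few diagnostic commands, as a sketch; the node and pod names are placeholders:

# Confirm the new node registered and is Ready
kubectl get nodes

# Check for taints the pending pods might not tolerate
kubectl describe node my-new-node | grep -A3 Taints

# Compare allocatable capacity with what is already requested on the node
kubectl describe node my-new-node | grep -A8 "Allocated resources"

# See exactly why the scheduler rejects the pending pod
kubectl describe pod my-pending-pod | grep -A10 Events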

kubectl drain not evicting helm memcached pods

Submitted by 走远了吗. on 2019-12-25 02:26:02
Question: I'm following this guide in an attempt to upgrade a Kubernetes cluster on GKE with no downtime. I've gotten all the old nodes cordoned and most of the pods evicted, but for a couple of the nodes kubectl drain just keeps running without evicting any more pods. kubectl get pods --all-namespaces -o=wide shows a handful of pods still running on the old pool, and when I run kubectl drain --ignore-daemonsets --force it prints a warning explaining why it's ignoring most of them; the only
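Two things commonly keep a drain hanging like this: pods using emptyDir volumes (drain refuses to delete them without an extra flag) and PodDisruptionBudgets that the eviction would violate, in which case drain retries indefinitely. A sketch of how one might check; the pod name and namespace are placeholders, and on newer kubectl versions the local-data flag is called --delete-emptydir-data:

# See whether a PodDisruptionBudget is blocking eviction
kubectl get pdb --all-namespaces

# Inspect the remaining pods' controllers and volumes
kubectl describe pod my-memcached-0 -n my-namespace

# Allow drain to remove pods that use emptyDir volumes
kubectl drain <node-name> --ignore-daemonsets --delete-local-data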

StorageClass of type local with a PVC gives an error in Kubernetes

Submitted by 只谈情不闲聊 on 2019-12-24 08:47:09
Question: I want to use a local volume that is mounted on my node at the path /mnts/drive, so I created a StorageClass (as shown in the documentation for the local StorageClass), a PVC, and a simple pod which uses that volume. These are the configurations used:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysampleclaim
spec:
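A frequent cause of errors with this setup is that kubernetes.io/no-provisioner does no dynamic provisioning, so the claim stays Pending (or the pod fails to schedule) until a matching local PersistentVolume exists. A minimal sketch of such a PV, assuming the node is named my-node; the PV name, capacity, and access mode are placeholders to adapt to the claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 5Gi                         # placeholder size; must satisfy the claim
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-fast           # must match the StorageClass above
  local:
    path: /mnts/drive
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node                      # placeholder: the node where /mnts/drive exists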

Are Dockerfiles available for Google's sample images on Google Container Registry?

Submitted by 纵饮孤独 on 2019-12-24 06:30:15
Question: I'm using the official stable ZooKeeper Helm chart for Kubernetes, which pulls a ZooKeeper Docker image from Google's sample images on Google Container Registry. That ZooKeeper image is available here; however, I can't seem to find any reference to the Dockerfile it is built from, or whether its Dockerfile is generated from some other representation (e.g., via Bazel). I'd like to know things like what else is installed on the image, what OS it's based on, etc. In general, are Dockerfiles for the
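Even without the original Dockerfile, the image itself can be inspected layer by layer. As a sketch; the image reference below is an assumption (substitute whatever image/tag your chart values actually point at):

# Pull the image and list the commands that produced each layer
docker pull gcr.io/google_samples/k8szk:v3
docker history --no-trunc gcr.io/google_samples/k8szk:v3

# Inspect metadata such as the entrypoint, environment, and exposed ports
docker inspect gcr.io/google_samples/k8szk:v3

# Check the OS it is based on from inside a throwaway container
docker run --rm --entrypoint cat gcr.io/google_samples/k8szk:v3 /etc/os-release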

How best to say a value is required in a Helm chart?

Submitted by 时光毁灭记忆、已成空白 on 2019-12-23 15:30:00
Question: I am doing this now: value: {{ required "A valid .Values.foo entry required!" .Values.foo }} But giving this same message for every required value in the templates is cumbersome and, in my opinion, clutters the templates. Is there a better way, where we could define it outside the template, or a cleaner way to do it within the template itself? Answer 1: You could do something by taking advantage of range and the fact that null will fail the required check. So in your values.yaml you could have this
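As a sketch of that range idea, one might keep the list of mandatory keys in one place and loop over it, so the error message is generated rather than repeated for each value (the key names here are illustrative, not from the original chart):

{{- /* fail fast if any mandatory top-level value is missing */ -}}
{{- range $key := list "foo" "image" "replicas" }}
{{- $_ := required (printf "A valid .Values.%s entry is required!" $key) (index $.Values $key) }}
{{- end }}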

Multiline string to a variable in a Helm template?

Submitted by ε祈祈猫儿з on 2019-12-23 13:00:35
Question: Is it possible to assign a multiline string to a variable in a Helm template? I have a variable as follows: {{- $fullDescription := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -}} but I would prefer to keep it in my code base as {{- $fullDescription :|-
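Two approaches are commonly suggested for this, offered here as a sketch rather than a definitive answer: a backquoted raw string (which Go templates allow to span lines) or capturing a named template with include. The text and the template name below are placeholders:

{{- /* option 1: a raw string literal in backquotes may span multiple lines */ -}}
{{- $fullDescription := `First line of the description.
Second line of the description.
Third line of the description.` -}}

{{- /* option 2: keep the text in a named template and capture its output */ -}}
{{- define "mychart.fullDescription" -}}
First line of the description.
Second line of the description.
{{- end -}}
{{- $fullDescription2 := include "mychart.fullDescription" . -}}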

Helm upgrade doesn't pull new container

Submitted by 一笑奈何 on 2019-12-23 08:59:28
Question: I built a simple Node.js API, pushed the Docker image to a repo, and deployed it to my k8s cluster with helm install (works perfectly fine). The pullPolicy is Always. Now I want to update the source code and deploy the updated version of my app. I bumped the version in all files, built and pushed the new Docker image, and tried helm upgrade, but it seems like nothing happened. With helm list I can see that the revision was deployed, but the changes to the source code were not. watch kubectl get pods
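A likely explanation (hedged, since the chart isn't shown) is that the image tag in the rendered manifests did not change, so Kubernetes sees an identical Deployment spec and never restarts the pods; pullPolicy: Always only takes effect when a new pod is actually created. Two common ways to force a rollout, with placeholder release, chart, and tag names, and assuming the chart exposes the tag as .Values.image.tag:

# 1) Point the chart at the new, unique image tag so the pod template changes
helm upgrade my-release ./my-chart --set image.tag=1.0.1

# 2) Or add a changing annotation to the Deployment's pod template,
#    so every upgrade renders a different spec and triggers a rollout:
#      spec.template.metadata.annotations:
#        rollme: {{ randAlphaNum 5 | quote }}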