On Kubernetes Helm, how to replace a pod with new config values


Question


I am using Helm charts to deploy pods, with a ConfigMap managing the configuration files.

I edit the ConfigMap directly to change the configuration, then delete the pods with kubectl delete so that the new configuration takes effect.

Is there an easy way, using Helm, to replace a running pod with the new configuration without running the "kubectl delete" command?


Answer 1:


We have found that helm upgrade --recreate-pods immediately terminates all running pods of the deployment, meaning some downtime for your service. In other words, there is no rolling update of your pods.

The Helm issue tracking this is still open: https://github.com/kubernetes/helm/issues/1702

Instead, the Helm documentation suggests adding a checksum of your configuration files to the deployment's pod template as an annotation. Whenever the configuration changes, the checksum (and therefore the pod template) changes, and the Deployment controller rolls your pods to pick up the new configuration.

The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Hashes the rendered ConfigMap template; a config change yields a new hash
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]

From the docs here: https://helm.sh/docs/charts_tips_and_tricks/#automatically-roll-deployments-when-configmaps-or-secrets-change
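For reference, a minimal sketch of the templates/configmap.yaml file that the checksum above hashes. The file path matches the include above, but the ConfigMap name and data keys are illustrative assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  # Illustrative name; use whatever your pods actually mount or reference
  name: {{ .Release.Name }}-config
data:
  # Example configuration file; contents are placeholders
  app.properties: |
    log.level=info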




Answer 2:


You can run

helm upgrade <release> <chart> --recreate-pods

to do this.
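A concrete sketch, assuming a release named my-app installed from a local chart at ./my-chart (both names are assumptions). Note that --recreate-pods was deprecated in Helm 2 and removed in Helm 3:

# Re-render the chart and force all pods of the release to be recreated
helm upgrade my-app ./my-chart --recreate-pods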




Answer 3:


If you need a rolling update instead of immediately terminating pods, add

date: "{{ .Release.Time.Seconds }}"

under spec.template.metadata.labels.

The pod template then changes on every release, which triggers a rolling update, provided spec.strategy.type is set to RollingUpdate.
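A minimal sketch of where this lands in the deployment template; the surrounding fields are assumptions for illustration, and note that .Release.Time is a Helm 2 built-in that was removed in Helm 3:

apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: RollingUpdate  # the default for Deployments, shown here for clarity
  template:
    metadata:
      labels:
        # Changes on every install/upgrade, so the pod template is always 'new'
        date: "{{ .Release.Time.Seconds }}"
[...]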

If you only changed a ConfigMap or a Secret, have a look at https://helm.sh/docs/developing_charts/#automatically-roll-deployments-when-configmaps-or-secrets-change




Answer 4:


@Oliver's solution didn't work for me because the pods were not recreated when the deployment annotations were updated.

The solution is to use a dynamic ConfigMap name based on your values.yaml file.

In values.yaml:

configVersion: # Change those numbers to force recreating pods
  myApp: 1

In your ConfigMap template:

metadata:
  name: {{ .Release.Name }}-my-config-v{{ .Values.configVersion.myApp }}

In your deployment:

envFrom:
  - configMapRef:
      # The name embeds the version, so bumping it changes the pod template
      name: {{ .Release.Name }}-my-config-v{{ .Values.configVersion.myApp }}
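To roll the pods, bump the version and upgrade: Helm renders a ConfigMap with a new name, the changed configMapRef alters the pod template, and the Deployment performs a rolling update (the old ConfigMap drops out of the rendered manifest, so Helm removes it). A hedged example, assuming a release my-app and chart directory ./my-chart:

# Bump the version (or edit values.yaml) to force a rolling update
helm upgrade my-app ./my-chart --set configVersion.myApp=2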


Source: https://stackoverflow.com/questions/44268277/on-kubernetes-helm-how-to-replace-a-pod-with-new-config-values
