Kubernetes Pod's containers not running when using sh commands


Question


Pod containers are not ready and get stuck in the Waiting state, over and over, every single time after they run an sh command (/bin/sh as well). For example, the pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps just go to "Completed" status after executing the sh command, or, if I set "restartPolicy: Always", they sit in the Waiting state with the reason CrashLoopBackOff. (Containers work fine if I do not set any command on them. If I do use the sh command within a container, after it is created I can see with "kubectl logs" that the env variable was set correctly.)

The expected behaviour is for the pod's containers to keep running after they execute the sh command.

I cannot find references regarding this particular problem and would appreciate a little help if possible. Thank you very much in advance!

Please disregard the image choice; I tried different images and the problem happens either way.

Environment: Kubernetes v1.17.1 on a QEMU VM

YAML:

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      ports:
      - containerPort: 88
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: how
  restartPolicy: Always

describe pod:

kubectl describe pod dapi-test-pod
Name:         dapi-test-pod
Namespace:    default
Priority:     0
Node:         kw1/10.1.10.31
Start Time:   Thu, 21 May 2020 01:02:17 +0000
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 192.168.159.83/32
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status:       Running
IP:           192.168.159.83
IPs:
  IP:  192.168.159.83
Containers:
  test-container:
    Container ID:  docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
    Image:         nginx
    Image ID:      docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
    Port:          88/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
      -c
      env
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 21 May 2020 01:13:21 +0000
      Finished:     Thu, 21 May 2020 01:13:21 +0000
    Ready:          False
    Restart Count:  7
    Environment:
      SPECIAL_LEVEL_KEY:  <set to the key 'how' of config map 'special-config'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-zqbsw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zqbsw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/dapi-test-pod to kw1
  Normal   Pulling    12m (x4 over 13m)     kubelet, kw1       Pulling image "nginx"
  Normal   Pulled     12m (x4 over 13m)     kubelet, kw1       Successfully pulled image "nginx"
  Normal   Created    12m (x4 over 13m)     kubelet, kw1       Created container test-container
  Normal   Started    12m (x4 over 13m)     kubelet, kw1       Started container test-container
  Warning  BackOff    3m16s (x49 over 13m)  kubelet, kw1       Back-off restarting failed container

Answer 1:


This happens because the process you are running in the container has completed, so the container shuts down and Kubernetes marks the pod as completed.

Whether the command is defined in the Docker image as part of CMD or added by you, as you have done here, the container shuts down once the command completes. It's the same reason an Ubuntu container run with plain Docker starts up and then shuts down directly afterwards.

For a pod, and its underlying Docker container, to continue running, you need to start a process that keeps running. In your case, the env command completes right away.
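
For example, a minimal sketch of a container spec that prints the environment and then keeps the container alive (this assumes the image's sleep accepts the infinity argument, as the Debian-based nginx image does; tail -f /dev/null is a common alternative):

  containers:
    - name: test-container
      image: nginx
      # print the environment, then block forever so the container keeps running
      command: [ "/bin/sh", "-c", "env && sleep infinity" ]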

If you set the pod's restart policy to Always, Kubernetes will keep trying to restart it until it reaches its back-off threshold.

One-off commands like the one you're running are useful for utility-type tasks, i.e. do one thing and then get rid of the pod.

For example:

kubectl run tester --generator run-pod/v1 --image alpine --restart Never --rm -it -- /bin/sh -c env

To run something longer, start a process that continues running.

For example:

kubectl run tester --generator run-pod/v1 --image alpine -- /bin/sh -c "sleep 30"

That command will run for 30 seconds, and so the pod will also run for 30 seconds. It also uses the default restart policy of Always. So after 30 seconds the process completes, Kubernetes marks the pod as complete, and then restarts it to do the same thing again.
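
You can watch this restart cycle from another terminal (tester is the pod name from the kubectl run example above):

kubectl get pod tester --watch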

Generally, pods start a long-running process, like a web server. For Kubernetes to know whether that pod is healthy, so it can do its high-availability magic and restart it if it crashes, it can use readiness and liveness probes.
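
For example, a minimal sketch of such probes for an nginx container (the path and port here are assumptions; point them at whatever your server actually serves):

      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 2
        periodSeconds: 5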




Answer 2:


You can use this manifest. The command ["/bin/sh", "-c"] says "run a shell and execute the following instructions"; the args are then passed to the shell as the commands to run. A multi-line args entry keeps it simple and easy to read. Your pod will display its environment variables and then start the NGINX process, which keeps running:

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      ports:
        - containerPort: 88
      command: ["/bin/sh", "-c"]
      args:
        - env;
          nginx -g 'daemon off;';
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: how
  restartPolicy: Always
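
To verify, you can check that the env output shows up in the container's logs and that the pod stays Running, e.g.:

kubectl logs dapi-test-pod
kubectl get pod dapi-test-pod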


Source: https://stackoverflow.com/questions/61926036/kubernetes-pods-containers-not-running-when-using-sh-commands
