Restart container within pod

Submitted by 穿精又带淫゛_ on 2019-11-27 17:29:45

Is it possible to restart a single container within a pod?

Not through kubectl, although depending on the setup of your cluster you can "cheat" and run docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, that the restart policy for the Pod says that is what it should do).
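For example, a rough sketch of that workaround, run on the node itself (this assumes Docker is the container runtime and that you have shell access to the node; the container name and ID below are placeholders):

docker ps | grep the-container-name    # find the ID of the target container on the node
docker kill the-sha-goes-here          # kubelet then restarts it according to the Pod's restartPolicy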

How do I restart the pod?

That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and Kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again).
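A minimal sketch of that approach, using the pod name from the question (the -w flag simply watches until the replacement pod appears):

kubectl delete pod test-1495806908-xn5jn
kubectl get pods -w    # the ReplicaSet creates a replacement pod with a new name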

There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.

Doing a kubectl exec POD_NAME -c CONTAINER_NAME reboot worked for me.
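Note that with recent kubectl versions the command to run goes after a -- separator, so an equivalent form would be (a sketch; it assumes reboot exists and is permitted inside the container, which is often not the case):

kubectl exec POD_NAME -c CONTAINER_NAME -- reboot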

ROY

Both pods and containers are ephemeral. Try the following command to stop the specific container; the k8s cluster will start a new container in its place.

kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"

This will send a SIGTERM signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
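If you want to confirm what is actually running as process 1 before signalling it, a quick sketch (this assumes the image ships /bin/sh and tr, as in the command above; the arguments in /proc/1/cmdline are NUL-separated, hence the tr):

kubectl exec [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "tr '\0' ' ' < /proc/1/cmdline"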

The whole reason for having Kubernetes is that it manages the containers for you, so you don't have to care so much about the lifecycle of the containers in the pod.

Since you have a Deployment that uses a ReplicaSet, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and Kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of Kubernetes.
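If you want to double-check that the pod really is owned by a ReplicaSet before deleting it, one way is kubectl's jsonpath output (a sketch, using the pod name from the question):

kubectl get pod test-1495806908-xn5jn -o jsonpath='{.metadata.ownerReferences[0].kind}'    # should print ReplicaSet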

We use a pretty convenient command line to force re-deployment of fresh images on our integration pod.
We noticed that our Alpine containers all run their "sustaining" command on PID 5, so sending that process a SIGTERM takes the container down. With imagePullPolicy set to Always, the kubelet re-pulls the latest image when it brings the container back.

kubectl exec -i [pod name] -c [container-name] -- kill -15 5

Killing the process specified in the Dockerfile's CMD / ENTRYPOINT works for me. (The container restarts automatically)

Rebooting was not allowed in my container, so I had to use this workaround.
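If you are not sure which PID the sustaining command runs on in your own image, a quick way to look it up (a sketch; it assumes the image ships a ps applet, as BusyBox-based Alpine images usually do):

kubectl exec -i [pod name] -c [container-name] -- ps    # find the PID of the CMD / ENTRYPOINT process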

Ajay Reddy

All the above answers have mentioned deleting the pod, but if you have many pods of the same service then it would be tedious to delete each one of them.

Therefore, I propose the following solution: restart by scaling the deployment down and back up.

  1) Set the scale to zero:

     kubectl scale deployment <<name>> --replicas=0 -n service

     The above command terminates all the pods of the deployment <<name>>.

  2) To start the pods again, set the replicas to more than 0:

     kubectl scale deployment <<name>> --replicas=2 -n service

     The above command starts your pods again, this time with 2 replicas (a one-step alternative is sketched after this list).
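As a one-step alternative to scaling down and back up, newer kubectl versions (1.15 and later) can restart every pod of a deployment with a rolling restart; a sketch, using the same hypothetical deployment name and namespace as above:

kubectl rollout restart deployment <<name>> -n service

Unlike scaling to zero, this keeps part of the replicas running while the others are replaced, so the service stays available.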

There was an issue in the coredns pod, so I deleted that pod with

kubectl delete pod -n=kube-system coredns-fb8b8dccf-8ggcf

and a new pod was started automatically.
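To find the name of the coredns pod in the first place, something like this works (a sketch; it assumes the k8s-app=kube-dns label that coredns deployments normally carry):

kubectl get pods -n kube-system -l k8s-app=kube-dns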
