How to restart Celery gracefully without delaying tasks


Question


We use Celery with our Django webapp to manage offline tasks; some of these tasks can run up to 120 seconds.

Whenever we make any code modifications, we need to restart Celery to have it reload the new Python code. Our current solution is to send a SIGTERM to the main Celery process (kill -s 15 `cat /var/run/celeryd.pid`), then to wait for it to die and restart it (python manage.py celeryd --pidfile=/var/run/celeryd.pid [...]).

Because of the long-running tasks, this usually means the shutdown will take a minute or two, during which no new tasks are processed, causing a noticeable delay to users currently on the site. I'm looking for a way to tell Celery to shutdown, but then immediately launch a new Celery instance to start running new tasks.
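
For reference, the current restart wrapper looks roughly like this (a sketch of our own script, with the remaining celeryd options elided):

OLD_PID=$(cat /var/run/celeryd.pid)
# Warm shutdown: SIGTERM lets in-flight tasks finish before the worker exits.
kill -s TERM "$OLD_PID"
# Wait for the old main process to exit; with 120-second tasks this is the
# minute-or-two window in which no new tasks are picked up.
while kill -0 "$OLD_PID" 2>/dev/null; do sleep 1; done
# Only now can the replacement worker be started.
python manage.py celeryd --pidfile=/var/run/celeryd.pid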

Things that didn't work:

  • Sending SIGHUP to the main process: this caused Celery to attempt to "restart," by doing a warm shutdown and then relaunching itself. Not only does this take a long time, it doesn't even work, because apparently the new process launches before the old one dies, so the new one complains ERROR: Pidfile (/var/run/celeryd.pid) already exists. Seems we're already running? (PID: 13214) and dies immediately. (This looks like a bug in Celery itself; I've let them know about it.)
  • Sending SIGTERM to the main process and then immediately launching a new instance: same issue with the Pidfile.
  • Disabling the Pidfile entirely: without it, we have no way of telling which of the 30 Celery processes is the main process that needs to be sent a SIGTERM when we want it to do a warm shutdown. We also have no reliable way to check whether the main process is still alive.

Answer 1:


celeryd has an --autoreload option. If enabled, the celery worker (main process) will detect changes in Celery modules and restart all worker processes. In contrast to the SIGHUP signal, autoreload restarts each process independently, when its currently executing task finishes. This means that while one worker process is restarting, the remaining processes can keep executing tasks.

http://celery.readthedocs.org/en/latest/userguide/workers.html#autoreloading
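
For example (a sketch, assuming an older Celery release and that your django-celery version passes the flag through; --autoreload was removed in Celery 4.0):

# Start the main worker with autoreload so child processes are restarted
# one by one as they become idle after a code change.
python manage.py celeryd --autoreload --pidfile=/var/run/celeryd.pid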




Answer 2:


I've recently fixed the bug with SIGHUP: https://github.com/celery/celery/pull/662




Answer 3:


rm *.pyc

This causes the updated tasks to be reloaded. I discovered this trick recently; I just hope there are no nasty side effects.
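
If your task modules live in nested packages, a recursive variant may be needed (a sketch; the project path is a placeholder):

# Remove stale bytecode everywhere under the project, not just the current directory.
find /path/to/project -name "*.pyc" -delete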




Answer 4:


A little late, but that can be fixed by deleting the file called celerybeat.pid.

Worked for me.
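
Something along these lines (the path is an assumption; use wherever your celerybeat pidfile actually lives):

# Remove the stale beat pidfile left behind by an unclean shutdown.
rm /var/run/celerybeat.pid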




Answer 5:


Can you launch it with a custom pid file name? Possibly timestamped, and key off of that to know which PID to kill?

CELERYD_PID_FILE="/var/run/celery/%n_{timestamp}.pid"

^ I don't know the timestamp syntax, but maybe you do, or you can find it?

Then use the current system time to kill off any old PIDs and launch a new one?
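
A rough sketch of that idea, generating the timestamp in the launch script instead of relying on a Celery pidfile substitution (all paths and names here are assumptions):

# Warm-shut-down any previously launched main processes first.
for old in /var/run/celery/worker_*.pid; do
    [ -e "$old" ] && kill -s TERM "$(cat "$old")"
done
# Start a fresh worker right away under a timestamped pidfile.
TS=$(date +%s)
python manage.py celeryd --pidfile="/var/run/celery/worker_${TS}.pid"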




Answer 6:


Well, you're using SIGHUP (1) for a warm shutdown of Celery. I'm not sure it actually causes a warm shutdown, but SIGINT (2) would. Try SIGINT in place of SIGHUP and then start Celery manually in your script (I guess).
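
In script form, that suggestion looks something like this (a sketch; the pidfile clash from the question still applies if the new instance starts before the old one exits):

# Ask the running worker for a warm shutdown via SIGINT instead of SIGHUP.
kill -s INT "$(cat /var/run/celeryd.pid)"
# Then relaunch from your script (the old pidfile must be released before
# this succeeds, as described in the question).
python manage.py celeryd --pidfile=/var/run/celeryd.pid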




Answer 7:


I think you can try this:

kill -s HUP `cat /var/run/celeryd.pid`
python manage.py celeryd --pidfile=/var/run/celeryd.pid

HUP may recycle every idle worker process while leaving the busy ones running to finish their current tasks. You can then safely start a new Celery main process and its workers; the old worker processes terminate on their own once their tasks have finished.

I've used this approach in our production environment and it seems safe so far. Hope this can help you!



Source: https://stackoverflow.com/questions/9642669/how-to-restart-celery-gracefully-without-delaying-tasks
