I'm using celery (solo pool with concurrency=1) and I want to be able to shut down the worker after a particular task has run. A caveat is that I want to avoid any possibility of the task being run again.
The recommended way to shut down a worker is to send the TERM signal. This will cause a celery worker to shut down after completing any currently running tasks. If you send a QUIT signal to the worker's main process instead, the worker will shut down immediately.
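For example, a minimal sketch of sending these signals from Python (the PID here is hypothetical; you would use the actual PID of the worker's main process):

import os
import signal

worker_pid = 12345  # hypothetical: PID of the celery worker's main process
os.kill(worker_pid, signal.SIGTERM)   # warm shutdown: finish running tasks, then exit
# os.kill(worker_pid, signal.SIGQUIT) # cold shutdown: terminate immediately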
The celery docs, however, usually discuss this in terms of managing celery from the command line or via systemd/initd, but celery additionally provides a remote worker control API via celery.app.control.
You can revoke a task to prevent workers from executing it, which should prevent the loop you are experiencing. Further, control supports shutting down a worker in this manner as well.
So I imagine the following will get you the behavior you desire.
@app.task(bind=True)
def shutdown(self):
    app.control.revoke(self.request.id)  # prevent this task from being executed again
    app.control.shutdown()  # send a shutdown signal to all workers
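With the solo pool and concurrency=1, you can then queue shutdown as the last task, and the worker will stop once it reaches it. A hypothetical usage sketch:

shutdown.apply_async()  # the worker runs this task, revokes it, then shuts down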
Since it's not currently possible to ack the task from within the task and then continue executing it, using revoke circumvents this problem: even if the task is queued again, the new worker will simply ignore it.
Alternatively, the following would also prevent a redelivered task from being executed a second time...
from celery.exceptions import Ignore

@app.task(bind=True)
def some_task(self):
    if self.request.delivery_info['redelivered']:
        raise Ignore()  # ignore the task if it was redelivered
    print('This should only execute on first receipt of task')
It's also worth noting that AsyncResult has a revoke method that calls self.app.control.revoke for you.
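For example, a quick sketch using some_task from above:

result = some_task.delay()
result.revoke()  # equivalent to app.control.revoke(result.id)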