Python - Retry a failed Celery task from another queue


By default, Celery adds all tasks to a queue named celery. So you can run your task there and, when an exception occurs, retry it; once it reaches the maximum number of retries, you can shift it to a new queue, say foo:

from celery import shared_task
from celery.exceptions import MaxRetriesExceededError

@shared_task(bind=True, default_retry_delay=1 * 60, max_retries=10)
def post_data_to_web_service(self, data, url):
    try:
        # do something with the given args
        ...
    except Exception as exc:
        try:
            # retry with the configured delay; once the retries are
            # exhausted, retry() raises MaxRetriesExceededError
            raise self.retry(exc=exc)
        except MaxRetriesExceededError:
            # re-submit the same task to the new queue 'foo'
            post_data_to_web_service.apply_async(args=[data, url], queue='foo')

When you start your worker, this task will try to do something with the given data. If it fails, it will retry 10 times with a delay of 60 seconds. Once the retries are exhausted and MaxRetriesExceededError is raised, it posts the same task to the new queue foo.
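For completeness, a minimal sketch of how you might enqueue the task in the first place; the payload and URL here are placeholder values, not part of the original answer:

# Enqueue on the default 'celery' queue; arguments are placeholders.
post_data_to_web_service.delay({"id": 1}, "https://example.com/endpoint")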

To consume the tasks on foo, you have to start a new worker:

celery worker -l info -A my_app -Q foo

or you can consume these tasks from the default worker if you start it with both queues:

celery worker -l info -A my_app -Q celery,foo
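If you prefer to declare the queues explicitly rather than rely on Celery creating missing queues on demand, a minimal configuration sketch might look like the following; the app name and broker URL are assumptions:

from celery import Celery
from kombu import Queue

# broker URL is an assumption; use whatever broker your project runs
app = Celery('my_app', broker='redis://localhost:6379/0')

# Declare both queues up front. By default Celery creates missing
# queues automatically (task_create_missing_queues=True), so this is
# optional unless you disable that behaviour.
app.conf.task_queues = (
    Queue('celery'),
    Queue('foo'),
)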