django-celery

Celery: revoke a task before it executes, using the Django database

試著忘記壹切 · submitted 2019-12-01 11:58:30
I'm using the Django database instead of RabbitMQ as my broker, for concurrency reasons, but I can't solve the problem of revoking a task before it executes. I found some answers on this matter (first answer, second answer), but they don't seem complete, or I can't get enough help from them. How can I extend the Celery task table with a model, adding a boolean field (revoked) that I set when I don't want the task to execute? Thanks.

Since Celery tracks tasks by an ID, all you really need is to be able to tell which IDs have been canceled. Rather than modifying kombu internals, you can create your own table (or memcached etc.) that…
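A minimal sketch of the answer's idea, under stated assumptions: sqlite3 stands in for the Django model (or memcached) mentioned above, and `run_task` is a hypothetical wrapper, not a Celery API. The point is simply a table of canceled IDs that each task checks before doing real work.

```python
import sqlite3

# Stand-in for a "revoked tasks" table; with Django you would declare a
# small model with a task_id field instead of using raw SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revoked (task_id TEXT PRIMARY KEY)")

def revoke(task_id):
    """Mark a task ID as canceled before it runs."""
    conn.execute("INSERT OR IGNORE INTO revoked VALUES (?)", (task_id,))

def is_revoked(task_id):
    row = conn.execute(
        "SELECT 1 FROM revoked WHERE task_id = ?", (task_id,)).fetchone()
    return row is not None

def run_task(task_id, body):
    """Hypothetical task wrapper: skip the body if the ID was canceled."""
    if is_revoked(task_id):
        return "skipped"
    return body()
```

In a real task you would put the `is_revoked` check at the top of the task function, keyed on `self.request.id`.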

Running celeryd_multi with supervisor

笑着哭i · submitted 2019-12-01 04:09:16
I'm working with djcelery and supervisor. I was running Celery under supervisor and everything worked fine, but once I realized I needed to switch to celery multi, everything broke. If I run celeryd_multi in a terminal it works, but it always runs in the background, while supervisor needs the command to run in the foreground; that is where the problem is. This is my celery.ini:

[program:celery_{{ division }}]
command = {{ virtualenv_bin_dir }}/python manage.py celeryd_multi start default mailchimp -c:mailchimp 3 -c:default 5 --loglevel=info --logfile={{ log_dir }}/celery/%n.log --pidfile={{ run_dir }}…
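A common workaround (a sketch reusing the template variables above, not a drop-in fix): supervisor wants a foreground process, while celeryd_multi always daemonizes, so run one foreground worker program per queue and let supervisor supervise each directly.

```ini
; One foreground worker per queue; celeryd stays in the foreground,
; which is what supervisor expects.
[program:celery_default]
command = {{ virtualenv_bin_dir }}/python manage.py celeryd --queues=default -c 5 --loglevel=info --logfile={{ log_dir }}/celery/default.log
autorestart = true

[program:celery_mailchimp]
command = {{ virtualenv_bin_dir }}/python manage.py celeryd --queues=mailchimp -c 3 --loglevel=info --logfile={{ log_dir }}/celery/mailchimp.log
autorestart = true
```

This trades the single celeryd_multi invocation for two program sections, but each process model matches what supervisor can monitor and restart.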

Python - Retry a failed Celery task from another queue

会有一股神秘感。 · submitted 2019-12-01 00:53:27
I'm posting data to a web service from Celery. Sometimes the data is not posted because the internet connection is down, and the task is retried indefinitely until it succeeds. Retrying forever is unnecessary: the net was down, so there is no point in immediately retrying again. I thought of a better solution: if a task fails three times (retrying a minimum of 3 times), it is shifted to another queue, which contains the list of all failed tasks. Then, when the internet is up and the data has been posted, i.e. the task has completed from the normal queue, it then…
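A broker-free sketch of the "retry three times, then park on a failed queue" idea described above. The helper names and `failed_queue` list are hypothetical stand-ins; with Celery you would call `self.retry(max_retries=3)` and, on final failure, re-send the task to a dedicated queue instead of appending to a list.

```python
MAX_RETRIES = 3

def run_with_retries(task, args, failed_queue, max_retries=MAX_RETRIES):
    """Try a task up to max_retries times; on final failure, park it."""
    for attempt in range(1, max_retries + 1):
        try:
            return task(*args)
        except Exception:
            if attempt == max_retries:
                # Hand the task off to the failed queue for later replay
                # instead of retrying forever.
                failed_queue.append((task, args))
                return None

def replay_failed(failed_queue):
    """Once connectivity is back, drain the failed queue."""
    while failed_queue:
        task, args = failed_queue.pop(0)
        task(*args)
```

The design choice here is that the normal queue never blocks on a dead network: after three attempts, responsibility moves to a replay step that runs when the connection is known to be back.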

How to run a Django Celery task at 6am and 6pm daily?

允我心安 · submitted 2019-11-30 17:18:33
Question: Hi, I have django-celery in my project. Currently the task runs every 12 hours (midnight/00:00 and 12:00 pm), but I want it to run at 6 am and 6 pm every day. How can I do that? Thanks in advance. The task:

from celery.task import periodic_task
from celery.schedules import crontab
from xxx.views import update_xx_task, execute_yy_task

@periodic_task(run_every=crontab(minute=0, hour='*/12'), queue='nonsdepdb3115', options={'queue': 'nonsdepdb3115'})
def xxx_execute_xx_task():
    execute_yy_task()
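The usual fix is to enumerate the hours rather than use a step: `crontab(minute=0, hour='6,18')` fires at 06:00 and 18:00, whereas `hour='*/12'` fires at 00:00 and 12:00 because step values count up from zero. A tiny broker-free sketch of that matching logic (the `hour_matches` helper is hypothetical, stdlib only, covering just these two spec forms):

```python
def hour_matches(spec: str, hour: int) -> bool:
    """Minimal crontab-style hour matching for '*/N' and 'a,b' specs."""
    if spec.startswith("*/"):          # step values count up from 0
        step = int(spec[2:])
        return hour % step == 0
    return hour in {int(h) for h in spec.split(",")}
```

So changing the decorator to `run_every=crontab(minute=0, hour='6,18')` gives the desired 6 am / 6 pm schedule.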

django celery: how to set task to run at specific interval programmatically

孤人 · submitted 2019-11-30 09:59:01
I found here that I can set a task to run at a specific interval at specific times, but that is only done at task declaration. How do I set a task to run periodically, dynamically? The schedule is derived from a setting and thus seems to be immutable at runtime. You can probably accomplish what you're looking for using task ETAs. This guarantees that your task won't run before the desired time, but doesn't promise to run it at exactly the designated time: if the workers are overloaded at the designated ETA, the task may run later. If that restriction isn't an issue, you could write a…
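A sketch of the ETA approach, under assumptions: the task name is hypothetical, and only the ETA arithmetic is shown here (that part is plain stdlib). With Celery you would then submit with something like `my_task.apply_async(args=(...), eta=eta)`, and the task could re-submit itself with a fresh ETA to run periodically.

```python
from datetime import datetime, timedelta, timezone

def next_eta(interval_seconds, now=None):
    """Earliest moment the next run is allowed to start.

    The ETA is a lower bound only: an overloaded worker may execute
    the task later than this time, never earlier.
    """
    now = now or datetime.now(timezone.utc)
    return now + timedelta(seconds=interval_seconds)
```

Because the interval is an ordinary function argument rather than a setting, it can be changed at runtime, which is what the static schedule cannot do.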

Retrying celery failed tasks that are part of a chain

走远了吗. · submitted 2019-11-30 08:08:44
Question: I have a Celery chain that runs some tasks. Each of the tasks can fail and be retried. Please see below for a quick example:

from celery import task

@task(ignore_result=True)
def add(x, y, fail=True):
    try:
        if fail:
            raise Exception('Ugly exception.')
        print '%d + %d = %d' % (x, y, x+y)
    except Exception as e:
        raise add.retry(args=(x, y, False), exc=e, countdown=10)

@task(ignore_result=True)
def mul(x, y):
    print '%d * %d = %d' % (x, y, x*y)

and the chain:

from celery.canvas import chain
chain(add…

Detect whether Celery is Available/Running

天大地大妈咪最大 · submitted 2019-11-30 06:17:37
Question: I'm using Celery to manage asynchronous tasks. Occasionally, however, the Celery process goes down, which causes none of the tasks to get executed. I would like to be able to check the status of Celery and make sure everything is working fine, and if I detect any problems, display an error message to the user. From the Celery worker documentation it looks like I might be able to use ping or inspect for this, but ping feels hacky and it's not clear exactly how inspect is meant to be used (if…
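A sketch of interpreting the ping reply, under the assumption that the reply was obtained separately, e.g. with `app.control.ping(timeout=1.0)` on a hypothetical `app` instance. Celery's ping returns a list of per-worker dicts like `[{'worker1@host': {'ok': 'pong'}}]`, or an empty list/None when no worker replies within the timeout; the function below only inspects that shape, so it runs without a broker.

```python
def celery_is_available(ping_reply):
    """True iff at least one worker answered the ping with 'pong'."""
    if not ping_reply:
        return False  # no reply at all: no worker is up
    return any(
        status.get("ok") == "pong"
        for worker in ping_reply
        for status in worker.values()
    )
```

Wrapping the actual ping call in a short timeout and treating an empty reply as "down" is what makes this usable for showing an error message to the user.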

How can I disable the Django Celery admin modules?

喜夏-厌秋 · submitted 2019-11-30 05:18:01
I have no need for the Celery modules in my Django admin. Is there a way I could remove them?

okm: To be more specific, in the admin.py of any app listed in INSTALLED_APPS after 'djcelery':

from django.contrib import admin
from djcelery.models import (
    TaskState, WorkerState, PeriodicTask, IntervalSchedule, CrontabSchedule)

admin.site.unregister(TaskState)
admin.site.unregister(WorkerState)
admin.site.unregister(IntervalSchedule)
admin.site.unregister(CrontabSchedule)
admin.site.unregister(PeriodicTask)

You can simply unregister Celery's models, like admin.site.unregister(CeleryModelIdoNotWantInAdmin).

Notify celery task of worker shutdown

醉酒当歌 · submitted 2019-11-30 04:22:23
Question: I am using Celery 2.4.1 with Python 2.6, the RabbitMQ backend, and Django. I would like my task to be able to clean up properly if the worker shuts down. As far as I am aware, you cannot supply a task destructor, so I tried hooking into the worker_shutdown signal. Note: AbortableTask only works with the database backend, so I can't use that.

from celery.signals import worker_shutdown

@task
def mytask(*args):
    obj = DoStuff()
    def shutdown_hook(*args):
        print "Worker shutting down"
        # cleanup nicely…