celery

Where does Django-celery/RabbitMQ store task results?

a 夏天 submitted on 2019-12-11 05:03:39
Question: My celery database backend settings are: CELERY_RESULT_BACKEND = "database" CELERY_RESULT_DBURI = "mysqlite.db" I am using RabbitMQ as my message broker. It doesn't seem like any results are getting stored in the db, and yet I can read the results after the task is complete. Are they in memory or in a RabbitMQ cache? I haven't tried reading the same result multiple times, so maybe it's read-once and then poof! Answer 1: CELERY_RESULT_DBURI is for the SQLAlchemy result backend, not the Django one. The Django …
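For reference, a minimal sketch of the two configurations being mixed up here (assuming django-celery/djcelery is installed for the first option; the SQLAlchemy URL is an illustration, the original setting used a bare filename):

```python
# settings.py -- a sketch, not the asker's actual settings.

# Option 1: django-celery's database backend. Results land in the
# djcelery_taskmeta table of whatever DATABASES["default"] points at;
# CELERY_RESULT_DBURI is ignored by this backend.
CELERY_RESULT_BACKEND = "database"

# Option 2: the SQLAlchemy database backend (used when djcelery is not
# installed). This is the backend that actually reads CELERY_RESULT_DBURI,
# and it expects an SQLAlchemy URL rather than a bare filename:
# CELERY_RESULT_BACKEND = "database"
# CELERY_RESULT_DBURI = "sqlite:///mysqlite.db"
```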

Celery: different settings for task_acks_late per worker / add custom option to celery

大兔子大兔子 submitted on 2019-12-11 04:18:12
Question: This question is a follow-up to django + celery: disable prefetch for one worker, Is there a bug? I had a problem with celery (see the question I am following up) and in order to resolve it I'd like to have two celery workers with --concurrency 1 each, but with two different settings of task_acks_late. My current approach works, but in my opinion it is not very elegant. I am doing the following: in settings.py of my django project: CELERY_TASK_ACKS_LATE = os.environ.get("LACK", "False") == …
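A sketch of the environment-variable approach the question describes (the LACK variable name comes from the excerpt; the worker invocations below are assumptions):

```python
# settings.py -- turn the environment variable into a real boolean
import os

CELERY_TASK_ACKS_LATE = os.environ.get("LACK", "False") == "True"
```

One worker can then be started with, for example, `LACK=True celery -A proj worker --concurrency=1` and the second one without the variable, so each process reads a different acks_late setting.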

How to tell if django celery task is properly running scrapy spider

家住魔仙堡 submitted on 2019-12-11 03:35:13
Question: I have written a scrapy spider that I'm running inside of a django celery task. When I run the task using the command python manage.py celery worker --loglevel=info from this tutorial, the task runs in the terminal and the scrapy log starts to appear, but soon afterwards the celery output seems to take over the terminal window. I'm still new to using celery, so I can't tell what is happening to the task. Here is the code for the task.py …
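The question's task.py is not shown; as a purely hypothetical sketch, one common pattern is to launch the crawl in a subprocess, which keeps the Twisted reactor out of the long-lived worker process and sends the scrapy log to a file instead of the worker's terminal:

```python
# tasks.py -- hypothetical example, not the asker's code
import subprocess

from celery import shared_task


@shared_task
def run_spider(spider_name):
    # Writing the scrapy log to a file keeps it from interleaving with the
    # celery worker's own output in the terminal.
    return subprocess.call(
        ["scrapy", "crawl", spider_name, "--logfile", "/tmp/%s.log" % spider_name]
    )
```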

How to override __call__ in celery on main?

倖福魔咒の submitted on 2019-12-11 02:48:38
Question: I've been using an abstract Task and overriding the __call__ method to handle some things before each task executes, like so: class CoreTaskHandler(Task): abstract = True def __call__(self, *args, **kwargs): But the __call__ method gets executed on the worker; I need some override that will get executed on main, not on the worker, each time the task gets "delayed". Does anyone have an idea how I would go about doing that? Answer 1: I have fixed this by overriding the apply_async method in Task: …
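A minimal sketch of that answer: apply_async (which .delay() calls under the hood) runs in the publishing process, so both dispatch paths pass through it on the caller side.

```python
from celery import Task


class CoreTaskHandler(Task):
    abstract = True

    def apply_async(self, args=None, kwargs=None, **options):
        # Runs in the calling ("main") process before the message is sent to
        # the broker -- unlike __call__, which runs inside the worker.
        # ... do caller-side work here ...
        return super(CoreTaskHandler, self).apply_async(args, kwargs, **options)
```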

“Unknown task” error in Celery Flower when posting a new task

百般思念 submitted on 2019-12-11 01:38:05
Question: I'm running celery 3.1.11 and flower 0.6.0. I have a celery application configured as follows: # myapp.tasks.celery.py from __future__ import absolute_import from celery import Celery class Config(object): BROKER_URL = 'amqp://' CELERY_RESULT_BACKEND = 'amqp' CELERY_TASK_RESULT_EXPIRES = None CELERY_RESULT_SERIALIZER = 'json' CELERY_INCLUDE = [ 'myapp.tasks.source', 'myapp.tasks.page', 'myapp.tasks.diffusion', 'myapp.tasks.place', ] celery = Celery('myapp') celery.config_from_object(Config) …
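A common cause of Flower's "Unknown task" response when posting through its API is that Flower was started without the application (or before any worker registered the tasks), so the posted task name is not in its registry. A sketch, assuming the app module path from the excerpt:

```bash
# Point both the worker and Flower at the same Celery app so the task names
# pulled in via CELERY_INCLUDE are known to both.
celery -A myapp.tasks.celery worker --loglevel=info
celery -A myapp.tasks.celery flower --port=5555
```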

Run celery with django on AWS Elastic Beanstalk using environment variables

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-11 01:12:10
Question: I want to run celery on AWS Elastic Beanstalk with my Django app. I followed this great answer by @yellowcap (How do you run a worker with AWS Elastic Beanstalk?). So my supervisord.conf looks like this: files: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh": mode: "000755" owner: root group: root content: | #!/usr/bin/env bash # Get django environment variables celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | …
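For context, a hypothetical continuation of that hook script in the spirit of the referenced answer (all paths, the app name, and the supervisord setup are assumptions, not the asker's actual file):

```bash
# Strip the trailing comma left by `tr '\n' ','`, then write a supervisord
# program section that hands the Django environment variables to the worker.
celeryenv=${celeryenv%?}

cat <<EOF > /opt/python/etc/celery.conf
[program:celeryd]
command=/opt/python/run/venv/bin/celery worker -A myproject --loglevel=INFO
directory=/opt/python/current/app
user=nobody
autostart=true
autorestart=true
environment=$celeryenv
EOF

# Tell the already-running supervisord about the new program definition.
supervisorctl -c /opt/python/etc/supervisord.conf reread
supervisorctl -c /opt/python/etc/supervisord.conf update
```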

Duplicated tasks after time change

风格不统一 submitted on 2019-12-11 00:53:24
Question: I don't know exactly why, but I am getting duplicated tasks. I think this may be related to the time change last weekend (the system clock was set back one hour). The first task should not be executed, since I explicitly set hour=2. Any idea why this happens? [2017-11-01 01:00:00,001: INFO/Beat] Scheduler: Sending due task every-first-day_month (app.users.views.websites_down) [2017-11-01 02:00:00,007: INFO/Beat] Scheduler: Sending due task every-first-day_month (app.users.views …
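A sketch of one way to make the schedule immune to the clocks going back: keep beat on UTC so no local hour ever occurs twice. The crontab below is an assumption about what the 'every-first-day_month' entry is meant to look like, based on the task name and the hour=2 mentioned above.

```python
from celery.schedules import crontab

# Keeping beat on UTC avoids the repeated local hour around a DST change.
CELERY_ENABLE_UTC = True
CELERY_TIMEZONE = "UTC"

CELERYBEAT_SCHEDULE = {
    "every-first-day_month": {
        "task": "app.users.views.websites_down",
        "schedule": crontab(minute=0, hour=2, day_of_month=1),
    },
}
```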

Celery Beat: How to define periodic tasks defined as classes (class based tasks)

非 Y 不嫁゛ submitted on 2019-12-11 00:50:24
Question: Until now I had only worked with Celery tasks defined as functions. I used to define their periodicity in the CELERYBEAT_SCHEDULE setting, like this: from datetime import timedelta CELERYBEAT_SCHEDULE = { 'add-every-30-seconds': { 'task': 'tasks.add', 'schedule': timedelta(seconds=30), 'args': (16, 16) }, } Now I am trying to use class-based tasks, like this one: class MyTask(Task): """My Task.""" def run(self, source, *args, **kwargs): """Run the celery task.""" logger.info("Hi!") My …
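A sketch of one way to wire a class-based task into the beat schedule: give the class an explicit name, register an instance, and refer to that name in CELERYBEAT_SCHEDULE. The task name, the args, and the app instance below are assumptions; register_task is only needed on Celery 4+, where Task subclasses are no longer auto-registered.

```python
import logging
from datetime import timedelta

from celery import Celery, Task

logger = logging.getLogger(__name__)

app = Celery("tasks")  # stands in for the project's real Celery app


class MyTask(Task):
    """My Task."""

    name = "tasks.my_task"  # beat refers to tasks by this name

    def run(self, source, *args, **kwargs):
        """Run the celery task."""
        logger.info("Hi! source=%s", source)


# Celery 4+ no longer auto-registers Task subclasses, so register explicitly.
app.register_task(MyTask())

CELERYBEAT_SCHEDULE = {
    "run-mytask-every-30-seconds": {
        "task": "tasks.my_task",
        "schedule": timedelta(seconds=30),
        "args": ("some-source",),
    },
}
```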

Celery: how to implement a single queue with multiple workers executing in parallel

血红的双手。 submitted on 2019-12-10 23:48:47
Question: I am currently running celery 4.0.2 with a single worker like this: celery.py: app = Celery('project', broker='amqp://jimmy:jimmy123@localhost/jimmy_vhost', backend='rpc://', include=['project.tasks']) if __name__ == '__main__': app.start() app.name tasks.py: from .celery import app from celery.schedules import schedule from time import sleep, strftime app.conf.beat_schedule = { 'planner_1': { 'task': 'project.tasks.call_orders', 'schedule': 1800, }, 'planner_2': { 'task': 'project.tasks.call …
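Since the question is about getting parallelism out of one queue, here is a sketch of the two usual options (the commands assume the 'project' module from the excerpt):

```bash
# Option 1: one worker process with several child processes
celery -A project worker --loglevel=info --concurrency=4

# Option 2: several separately named workers, all consuming the same queue
celery -A project worker -n worker1@%h --concurrency=1 &
celery -A project worker -n worker2@%h --concurrency=1 &
```

Either way, only one beat process should run; otherwise the beat_schedule entries above would be sent once per scheduler and appear duplicated.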

Can celery celerybeat dynamically add/remove tasks at runtime?

时光总嘲笑我的痴心妄想 submitted on 2019-12-10 23:20:16
Question: I have a project that does not include Django, so I can't use djcelery. But I found a modification of django-celery's DatabaseScheduler that uses SQLAlchemy. It works fine, just like djcelery's DatabaseScheduler did. The only problem is that it doesn't seem to send tasks that were added at runtime; once I restart celery beat, the tasks that were added before are sent successfully. So, is it possible to dynamically add/remove tasks without restarting celery beat? Thanks for any advice.
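No answer appears in the excerpt; as a sketch of one commonly suggested direction, beat only re-reads its schedule every max_interval seconds, so a database-backed scheduler combined with a short interval can pick up tasks added at runtime without a restart (the scheduler path below is hypothetical):

```bash
celery beat -S myproject.schedulers.DatabaseScheduler --max-interval 10
```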