celeryd

Celery Result error “args must be a list or tuple”

Submitted by 走远了吗 on 2019-12-10 18:08:31
Question: I am running a Django website and have just gotten Celery to run, but I am getting confusing errors. Here is how the code is structured. In tests.py:

    from tasks import *
    from celery.result import AsyncResult

    project = Project.objects.create()
    # initialize various sub-objects of the project
    c = function.delay(project.id)
    r = AsyncResult(c.id).ready()
    f = AsyncResult(c.id).failed()
    # wait until the task is done
    while not r and not f:
        r = AsyncResult(c.id).ready()
        f = AsyncResult(c.id).failed()
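A common cause of the error in the title is passing apply_async() a bare value for args instead of a sequence; whether that is the actual call site here is an assumption, but a minimal sketch with the function task from the excerpt looks like this:

    # raises "args must be a list or tuple"
    function.apply_async(args=project.id)

    # args wrapped in a list works
    function.apply_async(args=[project.id])

    # delay() wraps positional arguments for you
    function.delay(project.id)

As a side note, AsyncResult(c.id).get(timeout=...) blocks until the task finishes, which avoids the busy-wait loop in the excerpt.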

Celery tries to connect to the wrong broker

Submitted by ↘锁芯ラ on 2019-12-07 03:22:35
Question: I have this in my Celery configuration:

    BROKER_URL = 'redis://127.0.0.1:6379'
    CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'

Yet whenever I run celeryd, I get this error:

    consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111] Connection refused.
    Trying again in 2.00 seconds...

Why is it not connecting to the Redis broker I set it up with, which is running, by the way?

Answer 1: Import your Celery app and pass your broker to it like this:

    celery = Celery('task', broker='redis://127.0.0.1:6379')
    celery
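The amqp://guest@127.0.0.1:5672 address in the error is Celery's built-in default broker, which is used whenever the worker never sees the BROKER_URL setting. A minimal sketch of the explicit form from the answer, with the result backend added as well:

    from celery import Celery

    celery = Celery('task',
                    broker='redis://127.0.0.1:6379',
                    backend='redis://127.0.0.1:6379')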

Celery tries to connect to the wrong broker

Submitted by 拟墨画扇 on 2019-12-05 09:46:42
I have this in my Celery configuration:

    BROKER_URL = 'redis://127.0.0.1:6379'
    CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'

Yet whenever I run celeryd, I get this error:

    consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111] Connection refused.
    Trying again in 2.00 seconds...

Why is it not connecting to the Redis broker I set it up with, which is running, by the way?

Import your Celery app and add your broker like this:

    celery = Celery('task', broker='redis://127.0.0.1:6379')
    celery.config_from_object(celeryconfig)

If you followed the First Steps with Celery tutorial, specifically: app.config
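As a complement to the answer above, a minimal sketch of what the referenced celeryconfig module could look like and how the app loads it; the file layout is an assumption, not taken from the question:

    # celeryconfig.py
    BROKER_URL = 'redis://127.0.0.1:6379'
    CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'

    # tasks.py
    import celeryconfig
    from celery import Celery

    celery = Celery('task')
    celery.config_from_object(celeryconfig)  # accepts a module object or the string 'celeryconfig'

The worker then has to be started against this app (for example via the -A/--app option) so the settings are actually picked up; otherwise it falls back to the amqp://guest default seen in the error.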

Share memory areas between celery workers on one machine

Submitted by 烂漫一生 on 2019-12-04 05:29:05
I want to share small pieces of information between my worker nodes (for example cached authorization tokens, statistics, ...) in Celery. If I create a global inside my tasks file, it is unique per worker (my workers are processes and have a lifetime of one task/execution). What is the best practice? Should I save the state externally (DB), or create old-fashioned shared memory (which could be difficult because of the different pool implementations in Celery)? Thanks in advance!

I finally found a decent solution - core Python multiprocessing Manager:

    from multiprocessing import Manager
    manag = Manager
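A minimal sketch of how that Manager-based approach can look inside a tasks module; the names (shared_state, store_token, get_token) are hypothetical, and it assumes the prefork pool, where the Manager is created in the parent worker process before the pool forks:

    from multiprocessing import Manager
    from celery import Celery

    app = Celery('tasks', broker='redis://127.0.0.1:6379/0')

    manager = Manager()
    shared_state = manager.dict()  # proxy object reachable from every pool process

    @app.task
    def store_token(user_id, token):
        shared_state[user_id] = token

    @app.task
    def get_token(user_id):
        return shared_state.get(user_id)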

Does the number of celeryd processes depend on the --concurrency setting?

Submitted by 邮差的信 on 2019-12-04 01:24:13
Question: We are running Celery behind Supervisor and start it with:

    celeryd --events --loglevel=INFO --concurrency=2

This, however, creates a process graph that is up to three layers deep and contains up to 7 celeryd processes (Supervisor spawns one celeryd, which spawns several others, which again spawn processes). Our machine has two CPU cores. Are all of these processes working on tasks? Are maybe some of them just worker pools? How is the --concurrency setting connected to the number of processes

Using celeryd as a daemon with multiple django apps?

Submitted by 泄露秘密 on 2019-12-03 11:06:54
Question: I'm just starting to use django-celery and I'd like to set up celeryd to run as a daemon. The instructions, however, appear to suggest that it can be configured for only one site/project at a time. Can celeryd handle more than one project, or can it handle only one? And, if that is the case, is there a clean way to set up celeryd to be automatically started for each configuration, without requiring me to create a separate init script for each one?

Answer 1: Like all interesting questions, the
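The underlying constraint is that a worker is bound to a single Django settings module, so each project ends up with its own worker and its own daemon entry. A minimal sketch of one project's app, using the modern per-project layout rather than django-celery; the module and project names are hypothetical:

    # project_a/celery.py
    import os
    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project_a.settings')

    app = Celery('project_a')
    app.config_from_object('django.conf:settings')
    app.autodiscover_tasks()

Each project would then get its own init/daemon configuration pointing at its own app, since one worker process cannot load two different settings modules at once.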

How does a Celery worker consuming from multiple queues decide which to consume from first?

Submitted by ぐ巨炮叔叔 on 2019-12-02 22:08:15
I am using Celery to perform asynchronous background tasks, with Redis as the backend. I'm interested in the behaviour of a Celery worker in the following situation: I am running a worker as a daemon using celeryd. This worker has been assigned two queues to consume from through the -Q option:

    celeryd -E -Q queue1,queue2

How does the worker decide where to fetch the next task from? Does it randomly consume a task from either queue1 or queue2? Will it prioritise fetching from queue1 because it is first in the list of arguments passed to -Q? From my testing, it processes multiple queues
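If strict ordering between the queues matters, the -Q order by itself does not guarantee it; one common workaround is to route tasks explicitly and dedicate a worker to the urgent queue. A minimal sketch using current lowercase setting names (the task names are hypothetical):

    from kombu import Queue
    from celery import Celery

    app = Celery('tasks', broker='redis://127.0.0.1:6379/0')

    app.conf.task_queues = (
        Queue('queue1'),
        Queue('queue2'),
    )
    app.conf.task_routes = {
        'tasks.urgent_job':  {'queue': 'queue1'},
        'tasks.regular_job': {'queue': 'queue2'},
    }
    # one worker can then be started with -Q queue1 only,
    # and another with -Q queue1,queue2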

Does the number of celeryd processes depend on the --concurrency setting?

Submitted by £可爱£侵袭症+ on 2019-12-01 05:13:50
We are running Celery behind Supervisor and start it with:

    celeryd --events --loglevel=INFO --concurrency=2

This, however, creates a process graph that is up to three layers deep and contains up to 7 celeryd processes (Supervisor spawns one celeryd, which spawns several others, which again spawn processes). Our machine has two CPU cores. Are all of these processes working on tasks? Are maybe some of them just worker pools? How is the --concurrency setting connected to the number of processes actually spawned?

You shouldn't have 7 processes if --concurrency is 2. The actual processes started are:
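A minimal sketch for checking what a running worker actually spawned; with --concurrency=2 the prefork pool should report two child processes alongside the main worker process (the broker URL here is an assumption):

    from celery import Celery

    app = Celery(broker='redis://127.0.0.1:6379/0')

    stats = app.control.inspect().stats() or {}
    for node_name, info in stats.items():
        # 'processes' lists the PIDs of the pool's child processes
        print(node_name, info.get('pool', {}).get('processes'))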

Best way to map a generated list to a task in celery

Submitted by 风流意气都作罢 on 2019-11-30 15:12:17
I am looking for some advice as to the best way to map a list generated from a task to another task in Celery. Let's say I have a task called parse, which parses a PDF document and outputs a list of pages. Each page then needs to be individually passed to another task called feed. This all needs to go inside a task called process. So, one way I could do that is this:

    @celery.task
    def process():
        pages = parse.s(path_to_pdf).get()
        feed.map(pages)

Of course, that is not a good idea, because I am calling get() inside a task. Additionally this is inefficient, since my parse task is wrapped around a
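One hedged sketch of the usual answer to this pattern: chain parse into a small fan-out task that builds a group of feed signatures, so nothing blocks on get() inside a task. The dmap helper name is a convention, not part of Celery's API:

    from celery import Celery, group, signature

    app = Celery('tasks', broker='redis://127.0.0.1:6379/0')

    @app.task
    def parse(path_to_pdf):
        ...  # returns a list of pages

    @app.task
    def feed(page):
        ...

    @app.task
    def dmap(pages, callback):
        # clone the feed signature once per page and fire them as a group
        callback = signature(callback)
        return group(callback.clone([page]) for page in pages)()

    # parse's return value is prepended to dmap's arguments by the chain
    workflow = parse.s('/path/to/document.pdf') | dmap.s(feed.s())
    workflow.delay()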

Increase celery retry time each retry cycle

Submitted by 我只是一个虾纸丫 on 2019-11-30 03:29:00
I do retries with Celery like in the docs example:

    @task()
    def add(x, y):
        try:
            ...
        except Exception as exc:
            # override the default and retry in 1 minute
            add.retry(exc=exc, countdown=60)

How can I increase the retry countdown every time a retry occurs for this job - e.g. 60 seconds, 2 minutes, 4 minutes and so on, until MaxRetriesExceeded is raised?

Since version 4.2 you can use the options autoretry_for and retry_backoff for this purpose, for example:

    @task(max_retries=10, autoretry_for=(Exception,), retry_backoff=60)
    def add(x, y):
        pass

Here is a simple way to create a bigger delay each time
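For older Celery versions without retry_backoff, a minimal sketch of a manual alternative that doubles the countdown on each attempt; the bind=True / self.request.retries pattern is standard Celery, while the concrete numbers are only illustrative:

    @app.task(bind=True, max_retries=5)
    def add(self, x, y):
        try:
            ...
        except Exception as exc:
            # waits 60s, 120s, 240s, ... until max_retries is exhausted
            raise self.retry(exc=exc, countdown=60 * 2 ** self.request.retries)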