celery

Combining chains, groups and chunks with Celery

梦想的初衷 submitted on 2021-02-10 14:20:58
Question: I want to use Celery for a URL grabber. I have a list of URLs, and I must make an HTTP request to every URL and write the result to a file (the same file for the whole list). My first idea was to put this code in the task that Celery beat calls every n minutes:

    @app.task
    def get_urls(self):
        results = [get_url_content.si(url=url) for url in urls]
        ch = chain(
            group(*results),
            write_result_on_disk.s()
        )
        return ch()

This code works pretty well, but there is one problem: I have a thousand of
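The question title mentions chunks; one way to cut down the number of tasks when the URL list is long is Celery's chunks primitive, which splits the argument list into a handful of tasks that each process a slice. A minimal sketch under that assumption, reusing the task names from the question (chunks expects an iterable of argument tuples, hence zip(urls); the chunk size of 100 is arbitrary):

    from celery import chord

    @app.task
    def get_urls(self):
        # One task per 100 URLs instead of one task per URL.
        header = get_url_content.chunks(zip(urls), 100).group()
        # Run the callback once every chunk has finished; the callback
        # receives a list of per-chunk result lists.
        return chord(header)(write_result_on_disk.s())

Whether write_result_on_disk can accept the nested result lists depends on its implementation, so treat this as a sketch rather than a drop-in replacement.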

Celery : launch chord callback after its associated body

点点圈 submitted on 2021-02-10 12:03:44
Question: When I launch a list of chord() calls, each containing a group of tasks and a callback, the callbacks are called only after all the groups of tasks have finished, even the tasks which are not in the current chord. Here is the code for a better explanation:

    import time
    from celery import Celery, group, chord

    app = Celery('tasks')
    app.config_from_object('celeryconfig')

    @app.task(name='SHORT_TASK')
    def short_task(t):
        time.sleep(t)
        return t

    @app.task(name='FINISH_GROUP')
    def finish_group(res, nb):
        print(
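For context, launching several independent chords so that each callback only waits on its own group typically looks like the sketch below; the batches list and the use of nb as a per-chord label are hypothetical, filling in for the part of the question that is cut off:

    # A minimal sketch, reusing short_task and finish_group from above.
    batches = [[1, 2, 3], [2, 2], [5, 1, 1, 1]]
    for nb, batch in enumerate(batches):
        header = group(short_task.s(t) for t in batch)
        # The group result is prepended to finish_group's arguments as res.
        chord(header)(finish_group.s(nb))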

Celery tasks registering in multiple queues

廉价感情. submitted on 2021-02-10 05:09:33
Question: I am using Celery in Django (1.9) with a RabbitMQ server. I have four different queues, and I register each task in one of these four queues. The issue is that all my tasks are registered in all four queues. For example, I have a task named add and four queues A, B, C and D. Ideally the task should be registered only in queue A (as I registered it), but it shows up in all four queues. I could not work out what the actual issue is. Please help. Source: https://stackoverflow.com/questions/50203203/celery-tasks
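A likely source of the confusion is that every worker registers every task module it imports; which queue a task is actually sent to is controlled by routing, not by registration. A sketch of explicit routing, assuming Celery 3.x-style uppercase setting names (Celery 4+ uses the lowercase task_routes name) and a hypothetical myapp.tasks.add path:

    # settings.py / celeryconfig.py
    CELERY_ROUTES = {
        # Send only this task to queue A; other tasks keep their own routes.
        'myapp.tasks.add': {'queue': 'A'},
    }

A worker started with -Q A then consumes only from that queue; the task will still appear in every worker's registry, which is expected.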

django.db.utils.ProgrammingError: relation already exists on OpenBSD VPS

℡╲_俬逩灬. submitted on 2021-02-09 06:48:34
Question: I am getting this error. I tried migrate --fake default, but it doesn't seem to be working. Attached is the output of "python manage.py migrate". My setup is Django 1.6 + Celery 3.1.12 + PostgreSQL + Gunicorn on an OpenBSD VPS.

    Running migrations for users:
     - Migrating forwards to 0007_auto__del_field_profile_weekly_digest__del_field_profile_daily_digest_.
     > users:0001_initial
    FATAL ERROR - The following SQL query failed: CREATE TABLE "users_user" ("id" serial NOT NULL PRIMARY KEY, "password"

celery .delay hangs (recent, not an auth problem)

坚强是说给别人听的谎言 submitted on 2021-02-08 15:33:48
Question: I am running Celery 2.2.4/djCelery 2.2.4, using RabbitMQ 2.1.1 as a backend. I recently brought online two new Celery servers -- I had been running 2 workers across two machines with a total of ~18 threads, and on my new souped-up boxes (36 GB RAM + dual hyper-threaded quad-core), I am running 10 workers with 8 threads each, for a total of 180 threads -- my tasks are all pretty small, so this should be fine. The nodes have been running fine for the last few days, but today I noticed that .delay(

Using celery with Flask app context gives “Popped wrong app context.” AssertionError

空扰寡人 submitted on 2021-02-08 13:45:41
Question: I'm more or less using the setup for running Celery tasks with your Flask app context from here: http://flask.pocoo.org/docs/0.10/patterns/celery/ I'm getting the same error message as in "Create, manage and kill background tasks in flask app", but I'm getting it in the actual worker where the Celery task is being executed. Here is the trace:

    worker_1 | Traceback (most recent call last):
    worker_1 |   File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    worker_1 |     R
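For reference, the pattern on that Flask documentation page wraps every task invocation in an application context by subclassing the Celery task base class; a sketch of that approach (make_celery and CELERY_BROKER_URL follow the docs' naming):

    from celery import Celery

    def make_celery(app):
        celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
        celery.conf.update(app.config)
        TaskBase = celery.Task

        class ContextTask(TaskBase):
            abstract = True

            def __call__(self, *args, **kwargs):
                # Push the Flask app context only for the duration of the task,
                # so nothing else pops it out from under the worker.
                with app.app_context():
                    return TaskBase.__call__(self, *args, **kwargs)

        celery.Task = ContextTask
        return celery

The "Popped wrong app context" assertion usually means contexts are pushed and popped in mismatched order, so keeping the push/pop confined to the with block above is the main point of the pattern.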

Django / Celery / Kombu worker error: Received and deleted unknown message. Wrong destination?

北城余情 submitted on 2021-02-08 08:35:17
Question: It seems as though messages are not getting put onto the queue properly. I'm using Django with Celery and Kombu so that Django's own database serves as the broker backend. All I need is a very simple pub/sub setup. It will eventually deploy to Heroku, so I'm using foreman to run locally. Here is the relevant code and info:

pip freeze

    Django==1.4.2
    celery==3.0.15
    django-celery==3.0.11
    kombu==2.5.6

Procfile

    web: source bin/activate; python manage.py run_gunicorn -b 0.0.0.0:$PORT -w 4; python
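For context, the usual settings for the Django-database broker with these versions (django-celery 3.0.x and kombu 2.5.x) look roughly like the sketch below; after adding them, syncdb has to be run so the Kombu transport tables exist:

    # settings.py -- a minimal sketch of the database-as-broker configuration.
    import djcelery
    djcelery.setup_loader()

    BROKER_URL = 'django://'  # route Celery messages through Django's database

    INSTALLED_APPS = (
        # ... existing apps ...
        'djcelery',
        'kombu.transport.django',  # provides the message/queue tables
    )

The "Received and deleted unknown message" warning is typically logged when a worker consumes a message that is not a Celery task payload, for example one published directly with Kombu onto the queue the worker listens to.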