celery

Celery Starting a Task when Other Tasks have Completed

為{幸葍}努か, submitted on 2019-12-22 10:43:36
Question: I have three tasks in Celery:

celery_app.send_task('tasks.read_cake_recipes')
celery_app.send_task('tasks.buy_ingredients')
celery_app.send_task('tasks.make_cake')

Neither read_cake_recipes nor buy_ingredients has any dependencies, but make_cake must not run until both read_cake_recipes and buy_ingredients have finished. make_cake can be run at ANY time after the first two have started, but it has no way of knowing whether the other tasks have completed. So if read_cake_recipes or …

How can I minimise connections with django-celery when using CloudAMQP through dotcloud?

别来无恙, submitted on 2019-12-22 09:58:18
Question: After spending a few weeks getting django-celery-rabbitmq working on dotcloud, I discovered that dotcloud no longer supports RabbitMQ. Instead they recommend CloudAMQP, so I set up CloudAMQP as per the tutorials:

http://docs.dotcloud.com/tutorials/python/django-celery/
http://docs.dotcloud.com/tutorials/more/cloudamqp/
http://www.cloudamqp.com/docs-dotcloud.html

The service works fine. However, even when I do not have any processes running, CloudAMQP says there are 3 …
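The question is truncated, but the standard knob for idle AMQP connections is Celery's broker connection pool. CloudAMQP's small plans cap concurrent connections, so it is common to shrink the pool and avoid an AMQP result backend. A sketch using django-celery-era (Celery 3.x) setting names; the URL and values are placeholders, not taken from the question:

```python
# settings.py
BROKER_URL = 'amqp://user:password@lemur.cloudamqp.com/vhost'  # placeholder
BROKER_POOL_LIMIT = 1          # hold at most one pooled broker connection per process
BROKER_CONNECTION_TIMEOUT = 30
BROKER_HEARTBEAT = 30
CELERY_RESULT_BACKEND = None   # an AMQP result backend would open extra connections
```

Each worker process and each web process holds its own pool, so the observed connection count is roughly (pool limit) x (number of processes).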

How to throttle script that creates celery tasks faster than they're consumed?

喜欢而已, submitted on 2019-12-22 09:41:15
Question: I have a script that generates millions of Celery tasks, one per row in the database. Is there a way to throttle it so that it doesn't completely flood Celery? Ideally I want to keep Celery busy, but I don't want the Celery queue to grow beyond a few dozen tasks, since that is just a waste of memory (especially since, without some kind of throttle, the script would add millions of tasks to the queue almost instantly).

Answer 1: I've spent some time on this problem over the past several days and came …
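Answer 1 is cut off, but one common pattern is to make the producer poll the broker's queue depth and sleep whenever it exceeds a threshold, so the queue stays at a few dozen tasks while the workers drain it. A sketch with illustrative names and a pluggable depth function:

```python
import time

def enqueue_throttled(rows, submit, queue_depth, max_depth=50, poll=1.0):
    """Submit one task per row, pausing whenever the queue is too deep.

    rows        -- iterable of work items
    submit      -- callable that enqueues one task (e.g. mytask.delay)
    queue_depth -- callable returning the current number of waiting tasks
    max_depth   -- stop submitting while the queue holds this many tasks
    poll        -- seconds to sleep between depth checks
    """
    for row in rows:
        while queue_depth() >= max_depth:
            time.sleep(poll)
        submit(row)
```

With RabbitMQ, queue_depth can be implemented against the management API (the messages_ready field of GET /api/queues/&lt;vhost&gt;/&lt;queue&gt;); with Redis, a list-length query on the queue key. Check your broker's documentation for the exact call.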

Periodic Tasks with Celery and Django

人走茶凉, submitted on 2019-12-22 08:50:30
Question: I'm having trouble getting a periodic task to run with Celery 3.1.8, Django 1.6.1, and RabbitMQ. I'm a bit confused by the current documentation, as I understand that django-celery is no longer needed to get Celery running with Django. I have a feeling that I'm not running the worker correctly, but after searching for a solution on SO and Googling, I'm in need of help. Could anyone point me in the right direction? settings.py (not sure if I need this since I have a @periodic …
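The asker is right that django-celery is not required on Celery 3.1: the schedule can live directly in Django's settings, with beat run alongside (or embedded in) the worker. A hedged sketch using 3.1-era setting names; the task path is hypothetical:

```python
# settings.py
from datetime import timedelta

BROKER_URL = 'amqp://localhost//'
CELERYBEAT_SCHEDULE = {
    'run-every-30-seconds': {
        'task': 'myapp.tasks.my_periodic_task',  # hypothetical task path
        'schedule': timedelta(seconds=30),
    },
}
```

The worker is then started with the embedded beat scheduler: celery -A proj worker -B -l info. Only one worker should carry -B, otherwise each scheduler fires the task independently.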

Celery task history

时间秒杀一切, submitted on 2019-12-22 06:56:46
Question: I am building a framework for executing tasks on top of the Celery framework. I would like to see the list of recently executed tasks (for the last 2-7 days). Looking at the API I can find the app.backend object, but I cannot figure out how to query it for tasks. I can use backends such as Redis or a database, but I do not want to write SQL queries against the database explicitly. Is there a way to work with task history/results through an API? I tried to use Flower, but it can only handle events and …
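The core result-backend API is keyed by task id (AsyncResult), so there is no generic "list recent tasks" call. Getting queryable history without hand-written SQL usually means the django-celery-results backend, whose TaskResult is an ordinary Django model. A sketch, assuming that backend is installed and configured (not something the question confirms):

```python
# Assumes CELERY_RESULT_BACKEND = 'django-db' via the django-celery-results app,
# so every finished task is stored as a TaskResult row queryable via the ORM.
from datetime import timedelta

from django.utils import timezone
from django_celery_results.models import TaskResult

def recent_tasks(days=7):
    """Task results finished within the last `days` days, newest first."""
    cutoff = timezone.now() - timedelta(days=days)
    return TaskResult.objects.filter(date_done__gte=cutoff).order_by('-date_done')
```

Outside Django, the equivalent is a backend you can scan yourself (e.g. iterating Redis result keys), since the plain Celery API only resolves known ids.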

django celery - how to send request.FILES['photo'] to task

有些话、适合烂在心里, submitted on 2019-12-22 05:46:09
Question: I'm trying to send request.FILES['photo'], a file uploaded to my site, to Celery via:

tasks.upload_photos.delay(img=request.FILES['photo'])

I get a pickle error because it cannot serialize the file. What is the way to send a file to a task? The error: "can't pickle StringO objects". Thanks.

Answer 1: Read the file contents into a string, then pack it with the content type in a dict and send that.

Answer 2: If you plan on saving the file, you can save it to a model first, then pass the id/pk to the Celery task.
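Answer 1's suggestion can be sketched as packing the upload into a plain, picklable dict before handing it to the task (the helper name is mine, not from the answer):

```python
def pack_upload(uploaded_file):
    """Turn a Django UploadedFile into a plain dict any task serializer can
    handle (pickle directly; base64-encode 'data' first if using JSON)."""
    return {
        'name': uploaded_file.name,
        'content_type': uploaded_file.content_type,
        'data': uploaded_file.read(),  # raw bytes; only sensible for small files
    }

# In the view, the task then receives plain data instead of a file object:
#   tasks.upload_photos.delay(img=pack_upload(request.FILES['photo']))
```

For large files, Answer 2's approach (save first, pass the primary key) avoids pushing megabytes through the broker.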

Cannot start Celery Worker (Kombu.asynchronous.timer)

与世无争的帅哥, submitted on 2019-12-22 05:38:28
Question: I followed the first steps with Celery (Django) and am trying to run a heavy process in the background. I have a RabbitMQ server installed. However, when I try

celery -A my_app worker -l info

it throws the following error:

File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File " …

Django celery worker to send real-time status and result messages to front end

旧城冷巷雨未停, submitted on 2019-12-22 05:33:08
Question: In a Django app I'm running async tasks and would like to show progress, errors, etc. to the user. If there are errors, the user should be redirected to a page where additional input or some action is required to fix the problem. What is the best way to communicate from the Celery worker back to the front end?

Here's the basic structure in pseudo-code:

# views.py
from tasks import run_task

def view_task():
    run_task.delay()
    return render(request, 'template.html')

# tasks.py
from compute_module import …
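One common pattern (a sketch with illustrative names, not necessarily how the asker's compute_module works) is for the task to publish progress through the result backend with update_state, while the front end polls a small status view by task id:

```python
# tasks.py
from celery import shared_task

@shared_task(bind=True)
def run_task(self, total=10):
    for i in range(total):
        # ... one unit of work ...
        self.update_state(state='PROGRESS',
                          meta={'current': i + 1, 'total': total})
    return {'status': 'done'}

# views.py -- the page polls this endpoint (e.g. via fetch) with the task id
from celery.result import AsyncResult
from django.http import JsonResponse

def task_status(request, task_id):
    result = AsyncResult(task_id)
    # result.info holds the PROGRESS meta dict, or the exception on failure
    return JsonResponse({'state': result.state, 'info': str(result.info)})
```

On a FAILURE state the page's JavaScript can redirect the user to the fix-it page. For pushing updates instead of polling, WebSockets (e.g. Django Channels) is the usual next step.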

celery periodic tasks not executing

蹲街弑〆低调, submitted on 2019-12-22 04:44:23
Question: I am learning Celery and created a project to test my configuration. I installed celery==4.0.0 and django-celery-beat==1.0.1 according to the latest documentation. In drf_project (the main project dir, with manage.py)/drf_project/celery.py:

from __future__ import absolute_import, unicode_literals
from celery import Celery
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'drf_project.settings')
app = Celery('drf_project')
app.config_from_object('django.conf:settings', namespace='CELERY') …
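The excerpt stops mid-file. The standard Celery 4 + Django template continues with task autodiscovery, and with django-celery-beat the beat process must be started with the database scheduler or schedules defined in the DB never fire. A sketch of the conventional full file (an assumption about how the asker's file continues):

```python
# drf_project/drf_project/celery.py
from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'drf_project.settings')

app = Celery('drf_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()  # find tasks.py modules in all INSTALLED_APPS
```

Both processes must then run: celery -A drf_project worker -l info and celery -A drf_project beat -l info -S django_celery_beat.schedulers:DatabaseScheduler.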

Django, RabbitMQ, & Celery - why does Celery run old versions of my tasks after I update my Django code in development?

≯℡__Kan透↙, submitted on 2019-12-22 04:18:09
Question: I have a Django app that occasionally sends a task to Celery for asynchronous execution. I've found that, as I work on my code in development, the Django development server automatically detects code changes and restarts itself so I can see my changes. However, the RabbitMQ/Celery part of my app doesn't pick up these changes in development. If I change code that will later be run in a Celery task, Celery will still keep running the old version of …
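This happens because Celery workers import the task code once at startup and never reload it, so unlike Django's runserver they keep executing stale code until restarted. A common development workaround is watchmedo auto-restart from the watchdog package (an external tool, not part of Celery; proj is a placeholder for the app name):

```shell
pip install watchdog

# Restart the whole worker whenever any .py file changes (development only).
watchmedo auto-restart --directory=./ --pattern='*.py' --recursive -- \
    celery -A proj worker -l info
```

In production, workers are restarted deliberately on deploy instead, since an automatic restart can kill tasks mid-execution.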