celery

Unit testing an AsyncResult in celery

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-20 12:15:33
Question: I am trying to test some Celery functionality in Django's unit testing framework, but whenever I check an AsyncResult the tests act as though the task was never started. I know this code works in a real environment with RabbitMQ, so I was wondering why it doesn't work under the testing framework. Here is an example: @override_settings(CELERY_EAGER_PROPAGATES_EXCEPTIONS=True, CELERY_ALWAYS_EAGER=True, BROKER_BACKEND='memory',) def test_celery_do_work(self): result = myapp.tasks

How does a Celery worker consuming from multiple queues decide which to consume from first?

Submitted by 痞子三分冷 on 2019-12-20 10:24:15
Question: I am using Celery to perform asynchronous background tasks, with Redis as the backend. I'm interested in the behaviour of a Celery worker in the following situation: I am running a worker as a daemon using celeryd. This worker has been assigned two queues to consume from via the -Q option: celeryd -E -Q queue1,queue2 How does the worker decide where to fetch the next task to consume from? Does it randomly consume a task from either queue1 or queue2? Will it prioritise fetching from queue1
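For context, -Q only declares which queues the worker subscribes to; it opens consumers on all of them and takes messages as the broker delivers them, with no built-in priority between the listed queues. A hedged modern equivalent of the command in the question (proj is a placeholder app module; celeryd was replaced by the celery worker subcommand):

```shell
# One worker consuming from both queues with events (-E) enabled;
# listing order does not make queue1 higher priority than queue2.
celery -A proj worker -E -Q queue1,queue2 --loglevel=info
```

If strict ordering between queues matters, the usual approach is separate workers per queue rather than relying on consumption order.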

celery missed heartbeat (on_node_lost)

Submitted by 孤街浪徒 on 2019-12-20 10:16:23
Question: I just upgraded to Celery 3.1 and now I see this in my logs: on_node_lost - INFO - missed heartbeat from celery@queue_name for every queue/worker in my cluster. According to the docs, BROKER_HEARTBEAT is off by default and I haven't configured it. Should I explicitly set BROKER_HEARTBEAT=0, or is there something else I should be checking? Answer 1: Saw the same thing, and noticed a couple of things in the log files. 1) There were messages about time drift at the start of the log and occasional
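If the heartbeat noise turns out to be unwanted, the setting can be pinned off explicitly; a sketch of the relevant Celery 3.x configuration (values shown are assumptions, not taken from the question):

```python
# celeryconfig.py / Django settings -- Celery 3.x setting names
BROKER_HEARTBEAT = 0  # explicitly disable AMQP broker heartbeats

# Note: the 'missed heartbeat from celery@...' lines come from
# worker-to-worker event heartbeats (gossip), not the broker heartbeat;
# they can be silenced per worker on the command line:
#   celery worker --without-gossip --without-mingle
```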

Starting flask server in background

Submitted by 大兔子大兔子 on 2019-12-20 09:47:44
Question: I have a Flask application which I currently start up as follows: #phantom.py __author__ = 'uruddarraju' from phantom.api.v1 import app app.run(host='0.0.0.0', port=8080, debug=True) When I run this script, it starts successfully, printing: loading config from /home/uruddarraju/virtualenvs/PHANTOMNEW/Phantom/etc/phantom/phantom.ini * Running on http://0.0.0.0:8080/ But it never returns, and if I press CTRL-C the server stops. I am trying to deploy this to production and
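app.run() is the blocking development server and is not meant to be backgrounded in production; the usual pattern is a WSGI server under a process manager. A hedged sketch using gunicorn (the phantom:app module path is inferred from the snippet above):

```shell
pip install gunicorn
# Serve the Flask app object on the same host/port; --daemon detaches
# the master process so the shell gets its prompt back.
gunicorn --bind 0.0.0.0:8080 --workers 4 --daemon phantom:app
```

In a real deployment the --daemon flag is typically replaced by supervision under systemd or supervisord, so the server is restarted if it dies.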

How to programmatically generate celerybeat entries with celery and Django

Submitted by 喜夏-厌秋 on 2019-12-20 09:37:49
Question: I am hoping to programmatically generate celerybeat entries and resync celerybeat when entries are added. The docs state: By default the entries are taken from the CELERYBEAT_SCHEDULE setting, but custom stores can also be used, like storing the entries in an SQL database. So I am trying to figure out which classes I need to extend to be able to do this. I have been looking at the celery scheduler docs and the djcelery API docs, but the documentation on what some of these methods do is
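In the djcelery era this meant the database-backed scheduler (djcelery.schedulers.DatabaseScheduler); the maintained successor is django-celery-beat, whose models make programmatic entries straightforward without subclassing anything. A sketch under that assumption (the entry name and task path are hypothetical):

```python
# Requires django-celery-beat installed and migrated, with beat started as:
#   celery -A proj beat --scheduler django_celery_beat.schedulers:DatabaseScheduler
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS,
)
PeriodicTask.objects.create(
    interval=schedule,
    name='poll-feeds',          # hypothetical entry name (must be unique)
    task='myapp.tasks.poll',    # hypothetical task path
)
# The database scheduler detects new or changed rows and resyncs on its own.
```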

Celery & RabbitMQ running as docker containers: Received unregistered task of type '…'

Submitted by *爱你&永不变心* on 2019-12-20 09:37:18
Question: I am relatively new to Docker, Celery and RabbitMQ. In our project we currently have the following setup: one physical host running multiple Docker containers: 1x rabbitmq:3-management container # pull image from Docker Hub and install docker pull rabbitmq:3-management # run docker image docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit -p 8080:15672 -p 5672:5672 rabbitmq:3-management 1x celery container # pull docker image from Docker Hub docker pull celery # run celery

Django: How to automatically change a field's value at the time mentioned in the same object?

Submitted by 微笑、不失礼 on 2019-12-20 09:19:36
Question: I am working on a Django project for a racing event, in which a table in the database has three fields: 1) a boolean field indicating whether the race is active, 2) the race start time, 3) the race end time. When an object is created, start_time and end_time are specified. How do I change the value of the boolean field to True when the race starts and to False when it ends? How do I schedule these activities? Answer 1: To automatically update a model field after a specific time, you can use Celery tasks. Step-1:
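The usual shape of that Celery approach is to enqueue two tasks with eta set to the stored times, so the flag flips exactly when each moment passes. A stdlib sketch of the scheduling side (the set_race_* task names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Illustrative times; in practice read these off the saved Race instance.
start_time = datetime.now(timezone.utc) + timedelta(hours=1)
end_time = start_time + timedelta(minutes=30)

# Hypothetical tasks that set is_active=True/False on the Race row:
# set_race_active.apply_async((race.id,), eta=start_time)
# set_race_inactive.apply_async((race.id,), eta=end_time)
```

One caveat worth knowing: eta tasks live in worker memory until due, so very distant times are often better served by a periodic beat task that sweeps for races whose start/end has passed.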

Django & Celery — Routing problems

Submitted by 旧街凉风 on 2019-12-20 08:51:25
Question: I'm using Django and Celery and I'm trying to set up routing to multiple queues. When I specify a task's routing_key and exchange (either in the task decorator or using apply_async()), the task isn't added to the broker (which is Kombu connecting to my MySQL database). If I specify the queue name in the task decorator (which means the routing key is ignored), the task works fine. It appears to be a problem with the routing/exchange setup. Any idea what the problem could be? Here's the

How to set up celery workers on separate machines?

Submitted by 不想你离开。 on 2019-12-20 08:39:34
Question: I am new to Celery. I know how to install and run one server, but I need to distribute tasks to multiple machines. My project uses Celery to assign user requests arriving at a web framework to different machines and then returns the result. I read the documentation, but it doesn't mention how to set up multiple machines. What am I missing? Answer 1: My understanding is that your app will push requests into a queueing system (e.g. RabbitMQ) and then you can start any number of workers on
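Concretely, "multiple machines" just means every worker host runs the same project code and points at one shared broker; nothing else needs to know the workers exist. A hedged sketch (hostnames and credentials are placeholders):

```shell
# On the broker host (one machine): run RabbitMQ, reachable as broker-host:5672.
# On each worker machine, with the project code installed:
celery -A proj worker --loglevel=info \
    --broker amqp://user:pass@broker-host:5672//
# The web app enqueues tasks against the same broker URL; results come
# back through whatever result backend is configured.
```

Adding capacity is then a matter of starting the same command on another machine; the broker load-balances tasks across all connected workers.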

Celery + Django on Elastic Beanstalk causing error: <class 'xmlrpclib.Fault'>, <Fault 6: 'SHUTDOWN_STATE'>

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-20 06:29:36
Question: I have a Django 2 application deployed on AWS Elastic Beanstalk. I configured Celery to execute async tasks on the same machine. Since I added Celery, every time I redeploy my application with eb deploy myapp-env I get the following error: ERROR: [Instance: i-0bfa590abfb9c4878] Command failed on instance. Return code: 2 Output: (TRUNCATED)... ERROR: already shutting down error: <class 'xmlrpclib.Fault'>, <Fault 6: 'SHUTDOWN_STATE'>: file: /usr/lib64/python2.7/xmlrpclib.py line: 800 error: