celery

Celery on Python 3.7 raises TypeError: wrap_socket() got an unexpected keyword argument '_context'

Anonymous (unverified), submitted on 2019-12-02 22:51:30
The original startup commands were: start the task-executing service with celery worker -A celery_task -l info -P eventlet, and start the task-submitting service with celery beat -A celery_task -l info. The fix is to change how the worker is started: do not use eventlet, and pass the pool option instead, i.e. celery worker -A celery_task --loglevel=info --pool=solo. Note: celery_task is the module name, so adjust it to match your project. Source: cnblogs (博客园), author: luowen罗文, link: https://www.cnblogs.com/luowenConnor/p/11482867.html
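For context, a minimal sketch of what the celery_task module referenced by those commands might look like (the Redis broker URL and the add task are illustrative assumptions, not from the original post):

```python
# celery_task.py -- hypothetical module matching the commands above
from celery import Celery

app = Celery(
    'celery_task',
    broker='redis://127.0.0.1:6379/1',   # assumed broker; use your own
    backend='redis://127.0.0.1:6379/2',  # assumed result backend
)

@app.task
def add(x, y):
    # trivial placeholder task so the worker has something to execute
    return x + y
```

With that layout, celery worker -A celery_task --loglevel=info --pool=solo starts the worker on the solo pool instead of eventlet.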

Why is RabbitMQ not persisting messages on a durable queue?

旧时模样, submitted on 2019-12-02 22:20:35
I am using RabbitMQ with Django through Celery. I am using the most basic setup:

# RabbitMQ connection settings
BROKER_HOST = 'localhost'
BROKER_PORT = '5672'
BROKER_USER = 'guest'
BROKER_PASSWORD = 'guest'
BROKER_VHOST = '/'

I imported a Celery task and queued it to run one year later. From the IPython shell:

In [1]: from apps.test_app.tasks import add
In [2]: dt = datetime.datetime(2012, 2, 18, 10, 00)
In [3]: add.apply_async((10, 6), eta=dt)
DEBUG:amqplib:Start from server, version: 8.0, properties: {u'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': 'RabbitMQ
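As a rough sketch of the distinction this kind of problem usually comes down to: queue durability and message persistence are configured separately, and a durable queue alone does not keep messages across a broker restart. The setting names below follow the old pre-4.0 Celery style used in the question and are illustrative, not the confirmed fix for this particular case:

```python
# sketch: declare the queue durable AND publish messages as persistent
# (delivery_mode=2); either one alone is not enough to survive a broker restart
from kombu import Queue

CELERY_QUEUES = (
    Queue('default', routing_key='default', durable=True),
)
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_DELIVERY_MODE = 'persistent'   # maps to delivery_mode=2
```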

Celery基本使用

Anonymous (unverified), submitted on 2019-12-02 22:11:45
Celery is a simple, efficient, flexible, plug-and-play distributed task queue. It handles work that should be processed asynchronously, such as time-consuming operations like sending email, sending SMS, or uploading files, with the goal of improving the user experience. Celery mainly consists of a Broker (the middleman) and Workers (the task processors); the flow is: the client submits a task ---> the Broker receives the task and hands it to ---> a Worker, which processes it.

Install with: pip install -U Celery

broker specifies where the message queue is stored; backend specifies where execution results are stored.

from celery import Celery

# Add the configuration, using Redis as an example
# Option 1
app = Celery('demo', backend='redis://127.0.0.1:6379/2', broker='redis://127.0.0.1:6379/1')

# Option 2
app = Celery('demo')
app.conf.update(
    broker_url='redis://127.0.0.1:6379/1',
    result_backend='redis://127.0.0.1:6379/2',
)

# Option 3: load a .py module; specify broker_url/result_backend in config
app = Celery('demo')
app.config_from_object('config')
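A small runnable sketch of the third option, together with an example task; the module and task names (config.py, tasks.py, send_email) are placeholders, not from the original post:

```python
# config.py -- the module loaded by app.config_from_object('config')
broker_url = 'redis://127.0.0.1:6379/1'
result_backend = 'redis://127.0.0.1:6379/2'
```

```python
# tasks.py -- define a task on the app and call it asynchronously
from celery import Celery

app = Celery('demo')
app.config_from_object('config')

@app.task
def send_email(to):
    # placeholder for the slow work (email/SMS/upload) mentioned above
    return 'sent to %s' % to

# client side:
#   result = send_email.delay('user@example.com')  # enqueue and return at once
#   result.get(timeout=10)                         # block for the stored result
```

The worker for this layout would be started with something like celery -A tasks worker -l info.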

How does a Celery worker consuming from multiple queues decide which to consume from first?

ぐ巨炮叔叔, submitted on 2019-12-02 22:08:15
I am using Celery to perform asynchronous background tasks, with Redis as the backend. I'm interested in the behaviour of a Celery worker in the following situation: I am running a worker as a daemon using celeryd. This worker has been assigned two queues to consume through the -Q option: celeryd -E -Q queue1,queue2 How does the worker decide where to fetch the next task to consume from? Does it randomly consume a task from either queue1 or queue2? Will it prioritise fetching from queue1 because it is first in the list of arguments passed to -Q? From my testing, it processes multiple queues
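If the real goal is to control which queue gets drained first, one common pattern (a sketch, not necessarily what the question's author did; task and queue names are illustrative) is to route tasks explicitly and dedicate a worker to each queue:

```python
# sketch: explicit queues + routing so separate workers can own each queue
from kombu import Queue

CELERY_QUEUES = (
    Queue('queue1', routing_key='queue1'),
    Queue('queue2', routing_key='queue2'),
)
CELERY_ROUTES = {
    'myapp.tasks.important_task':  {'queue': 'queue1'},
    'myapp.tasks.background_task': {'queue': 'queue2'},
}

# then, if strict priority between the queues matters, run one worker per queue:
#   celeryd -E -Q queue1
#   celeryd -E -Q queue2
```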

Simulating the passing of time in unittesting

家住魔仙堡, submitted on 2019-12-02 21:35:16
I've built a paywalled CMS + invoicing system for a client and I need to get more stringent with my testing. I keep all my data in a Django ORM and have a bunch of Celery tasks that run at different intervals and make sure that new invoices and invoice reminders get sent, and that access is cut off when users don't pay their invoices. For example, I'd like to be able to run a test that: Creates a new user and generates an invoice for X days of access to the site Simulates the passing of X + 1 days, and runs all the tasks I've got set up in Celery. Checks that a new invoice for another X days has
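One way to approach this kind of test, sketched under assumptions: the freezegun library for faking the clock, CELERY_ALWAYS_EAGER so queued tasks run inline, and hypothetical helper functions standing in for the real models and tasks:

```python
# sketch: freeze "now" at a point after the paid period, run the task bodies
# synchronously, then assert on the resulting database state
import datetime
from freezegun import freeze_time

def test_new_invoice_created_after_period_expires():
    user = create_user_with_invoice(days=30)        # hypothetical helper
    later = datetime.datetime.now() + datetime.timedelta(days=31)
    with freeze_time(later):
        send_invoice_reminders.apply()               # .apply() runs the task
        cut_off_unpaid_accounts.apply()              # locally, no broker needed
    assert user.invoices.count() == 2                # hypothetical assertion
```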

Monitoring Celery, what should I use? [closed]

夙愿已清, submitted on 2019-12-02 21:21:00
I'm using Django, Celery, and Django-Celery. I'd like to monitor the state/results of my tasks, but I'm a little confused on how to do that. Do I use ./manage.py celeryev, ./manage.py celerymon, or ./manage.py celerycam? Do I run sudo /etc/init.d/celeryevcam start? Run:
./manage.py celeryd -E
./manage.py celerycam
The first starts a worker with events enabled. Now you can find task results in the django admin interface.
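Beyond celerycam and the Django admin, task state can also be checked in code once a result backend is configured; a small sketch (the task id below is just an example value):

```python
# sketch: querying a task's state/result by its id
from celery.result import AsyncResult

result = AsyncResult('d9078da5-9915-40a0-bfa1-392c7bde42ed')  # example id
print(result.state)       # PENDING / STARTED / SUCCESS / FAILURE ...
if result.successful():
    print(result.get())   # the task's return value
```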

django/celery: Best practices to run tasks on 150k Django objects?

可紊, submitted on 2019-12-02 21:08:32
I have to run tasks on approximately 150k Django objects. What is the best way to do this? I am using the Django ORM as the broker. The database backend is MySQL, and it chokes and dies during the task.delay() of all the tasks. Relatedly, I also wanted to kick this off from the submission of a form, but the resulting request produced a very long response time that timed out. I would also consider using something other than the database as the "broker". It really isn't suitable for this kind of work. Though, you can move some of this overhead out of the request/response cycle by launching
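One common way to take that overhead out of the request/response cycle is to enqueue primary keys in chunks rather than one task per object; a sketch, where MyModel, do_work, and the chunk size are placeholders:

```python
# sketch: dispatch primary keys in chunks so neither the request nor the
# broker has to handle 150k individual messages
from celery import shared_task

@shared_task
def process_items(pk_list):
    for obj in MyModel.objects.filter(pk__in=pk_list):   # placeholder model
        do_work(obj)                                      # placeholder work

def kick_off(chunk_size=1000):
    # the form view only enqueues ~150 chunk tasks and returns immediately
    pks = list(MyModel.objects.values_list('pk', flat=True))
    for i in range(0, len(pks), chunk_size):
        process_items.delay(pks[i:i + chunk_size])
```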

Using Celery with existing RabbitMQ messages

…衆ロ難τιáo~, submitted on 2019-12-02 21:07:23
I have an existing RabbitMQ deployment that a few Java applications are using to send out log messages as JSON strings on various channels. I would like to use Celery to consume these messages and write them to various places (e.g. DB, Hadoop, etc.). I can see that Celery is designed to be both the producer and consumer of RabbitMQ messages, since it tries to hide the mechanism by which those messages are delivered. Is there any way to get Celery to consume messages created by another app and run jobs when they arrive? It's currently hard to add custom consumers to the celery workers
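One workaround, sketched below, is to consume the foreign messages with kombu (the messaging library Celery is built on) and hand each message to a normal Celery task; this is not a built-in Celery feature, and the queue/exchange names and the store_log_message task are assumptions about the Java apps' setup:

```python
# sketch: a standalone kombu consumer that forwards each message to a task
from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin

log_queue = Queue('app-logs', Exchange('logs', type='topic'), routing_key='#')

class LogConsumer(ConsumerMixin):
    def __init__(self, connection):
        self.connection = connection

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=[log_queue], callbacks=[self.handle])]

    def handle(self, body, message):
        # hand the raw JSON string to a Celery task for processing
        store_log_message.delay(body)   # hypothetical task
        message.ack()

if __name__ == '__main__':
    with Connection('amqp://guest:guest@localhost//') as conn:
        LogConsumer(conn).run()
```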

celery missed heartbeat (on_node_lost)

我只是一个虾纸丫, submitted on 2019-12-02 20:54:03
I just upgraded to celery 3.1 and now I see this in my logs: on_node_lost - INFO - missed heartbeat from celery@queue_name, for every queue/worker in my cluster. According to the docs, BROKER_HEARTBEAT is off by default and I haven't configured it. Should I explicitly set BROKER_HEARTBEAT=0, or is there something else that I should be checking? Saw the same thing, and noticed a couple of things in the log files. 1) There were messages about time drift at the start of the log and occasional missed heartbeats. 2) At the end of the log file, the drift messages went away and only the missed
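For reference, the two knobs involved, sketched with old-style setting names; whether either is the right fix depends on what the drift/heartbeat messages turn out to mean in this cluster:

```python
# sketch: explicitly disable AMQP broker heartbeats in the Celery settings
BROKER_HEARTBEAT = 0

# the on_node_lost messages typically come from worker-to-worker gossip/event
# heartbeats, which can be turned off when starting the workers, e.g.:
#   celery worker -A proj --without-gossip --without-heartbeat
```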

Create dynamic queues with Celery

白昼怎懂夜的黑, submitted on 2019-12-02 20:42:17
Here's my scenario: When a user logs in to my website, I queue up a bunch of tasks for the given user (typically each task takes 100s of msecs and there are 100s of tasks per user). These tasks are queued to the default Celery queue and I have 100s of workers running. I use websockets to show the user real-time progress as the tasks complete on the backend. Life is good if I have just 1 or 2 active users. Now if a few concurrent users log in to my site, the later users are queued behind the initial users and their tasks starve (since all the tasks go to the same queue). My thoughts are to
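A sketch of the "dynamic queue per user" idea: route each user's tasks to their own queue name at call time and point one or more workers at it. The app, task, and queue names below are illustrative, not from the question:

```python
# sketch: per-user queues created on the fly so one user's backlog cannot
# starve another user's tasks
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')

@app.task
def crunch(item):
    ...  # the ~100 ms unit of work per item

def enqueue_for_user(user_id, items):
    queue_name = 'user-%s' % user_id
    for item in items:
        # with the default create-missing-queues behaviour the queue is
        # declared on first use, so no upfront configuration is needed
        crunch.apply_async(args=(item,), queue=queue_name)

# a worker (or several) can then be pointed at one or more user queues:
#   celery worker -A proj -Q user-42,user-43
```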