celery

Using Celery in Django

一世执手 submitted on 2019-12-01 13:26:08
There used to be a module called django-celery that everyone used to run asynchronous tasks in Django. Its version requirements on django, celery, and django-celery itself were so strict that the slightest mismatch made it unusable, and the module hasn't been updated in a long time, so nowadays people use celery on its own. Using it inside Django still requires attention to a few points, pitfalls I ran into myself, which I'll cover later.

1. Install celery

    pip install celery==3.1.25

2. Celery overview

Celery is a distributed asynchronous message task queue developed in Python. It makes asynchronous task processing easy; if your business scenario needs asynchronous tasks, celery is worth considering.

Celery has the following advantages:

- Simple: once you are familiar with celery's workflow, configuration and usage are fairly straightforward
- Highly available: when a task fails, or the connection is interrupted during execution, celery automatically retries the task
- Fast: a single celery process can handle millions of tasks per minute
- Flexible: almost every celery component can be extended or customized

Celery's components:

- Message broker: celery itself provides no messaging service, but it integrates easily with third-party message brokers, including RabbitMQ and Redis
- Task execution unit (worker): the worker is the unit that actually executes the tasks
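The post breaks off here before showing any Django wiring, so as a reference point here is a minimal sketch of the usual layout for the Celery 3.1 line installed above. The project name myproject, the settings contents, and the Redis broker URL are assumptions, not taken from the post:

```python
# myproject/celery.py -- a minimal sketch; names and URLs are illustrative.
from __future__ import absolute_import  # needed on Python 2 / the Celery 3.1 era

import os

from celery import Celery

# Make sure the Django settings module is set before the app configures itself.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')

# Celery 3.1 reads its configuration (e.g. BROKER_URL = 'redis://127.0.0.1:6379/1')
# straight from Django settings; Celery 4+ would instead use
# config_from_object('django.conf:settings', namespace='CELERY').
app.config_from_object('django.conf:settings')

# Look for a tasks.py module inside every installed Django app.
from django.conf import settings  # noqa: E402

app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
```

With this in place, running celery -A myproject worker -l info from the project root picks up the Django settings and any tasks.py modules in the installed apps.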

Restart celery beat and worker during Django deployment

女生的网名这么多〃 submitted on 2019-12-01 12:33:04
Question: I am using celery==4.1.0 and django-celery-beat==1.1.0, running gunicorn + celery + rabbitmq with Django. This is my config for creating beat and worker:

    celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
    celery -A myproject worker -l info -f /var/log/celery/celery.log --detach

During Django deployment I am doing the following:

    rm -f celerybeat.pid
    rm -f celeryd.pid
    celery -A myproject beat -l info -f /var/log/celery/celery.log --detach
    celery -A myproject worker -l info

How to test celery with django on a windows machine

眉间皱痕 submitted on 2019-12-01 12:29:09
I'm looking for a resource, documentation, or advice on how to test Django Celery on my Windows machine before deploying on a Linux-based server. Any useful answer would be appreciated and accepted.

Celery (since version 4, as pointed out by another answer) does not support Windows (source: http://docs.celeryproject.org/en/latest/faq.html#does-celery-support-windows). Even so, you have some options:

1) Use task_always_eager=True. This will run your tasks synchronously, so you can verify that your code is doing what it's supposed to do. Running tasks synchronously works even on Windows
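The list of options is cut off above. As a minimal sketch of option 1, the setting the answer names can be exercised like this (the app and task names are illustrative, not from the answer):

```python
from celery import Celery

# 'memory://' avoids needing any real broker for this test.
app = Celery('tasks', broker='memory://')
app.conf.task_always_eager = True       # run tasks in-process instead of via a worker
app.conf.task_eager_propagates = True   # let task exceptions surface in the caller

@app.task
def add(x, y):
    return x + y

# .delay() now executes synchronously, so this passes on Windows
# with no worker process and no real broker running.
assert add.delay(2, 3).get() == 5
```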

Celery revoke task before execute using django database

試著忘記壹切 submitted on 2019-12-01 11:58:30
I'm using the Django database instead of RabbitMQ for concurrency reasons, but I can't solve the problem of revoking a task before it executes. I found some answers about this, but they don't seem complete, or I can't get enough help from them: first answer, second answer. How can I extend the Celery task table using a model, adding a boolean field (revoked) to set when I don't want the task to execute? Thanks.

Since Celery tracks tasks by an ID, all you really need is to be able to tell which IDs have been canceled. Rather than modifying kombu internals, you can create your own table (or memcached etc.) that
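The answer is cut off above. A sketch of the approach it describes, assuming the code lives in an ordinary Django app (the model and task names are illustrative, not from the answer):

```python
# models.py of some Django app
from django.db import models

class RevokedTask(models.Model):
    """IDs of tasks that should be skipped if a worker ever picks them up."""
    task_id = models.CharField(max_length=255, unique=True, db_index=True)

# tasks.py of the same app
from celery import shared_task

@shared_task(bind=True)
def my_task(self):
    # Consult the cancellation table before doing any real work.
    if RevokedTask.objects.filter(task_id=self.request.id).exists():
        return 'revoked'
    ...  # actual work goes here
```

Cancelling a queued task then amounts to RevokedTask.objects.create(task_id=result.id) at any point before the worker reaches it.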

Celery beat with method tasks not working

寵の児 submitted on 2019-12-01 11:04:16
I'm trying to run celerybeat on a method task, and can't get anything to work out properly. Here's an example setup:

    from celery.contrib.methods import task_method
    from celery import Celery, current_app

    celery = Celery('tasks', broker='amqp://guest@localhost//')
    celery.config_from_object("celeryconfig")

    class X(object):
        @celery.task(filter=task_method, name="X.ppp")
        def ppp(self):
            print "ppp"

and my celeryconfig.py file is:

    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        'test': {
            'task': 'X.ppp',
            'schedule': timedelta(seconds=5),
        },
    }

When I run celery beat, I'm getting errors like:
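The error output is cut off above. What follows is not the original poster's fix, only a commonly used workaround: method tasks (celery.contrib.methods) never worked reliably with beat and were removed outright in Celery 4, so a schedule entry usually points at a plain module-level task that delegates to the method. A sketch:

```python
from celery import Celery

celery = Celery('tasks', broker='amqp://guest@localhost//')

class X(object):
    def ppp(self):
        print('ppp')

# A module-level task is what the 'X.ppp' entry in CELERYBEAT_SCHEDULE
# resolves to; it simply delegates to the method.
@celery.task(name='X.ppp')
def ppp_task():
    X().ppp()
```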

How to get the failed tasks in Celery?

青春壹個敷衍的年華 submitted on 2019-12-01 10:12:39
I am using celery to process some tasks. I can see how many are active or scheduled, etc., but I am not able to find any way to see the tasks that have failed. Flower does show me the status, but only if it was running when the task was started and failed. Is there any command to get all the tasks that have failed (STATUS: FAILURE)? I do have the task id from when each task was created, but there are millions of them, so I can't check them one by one even if there is a way to check by task id. But if there is such a command, please let me know.

RichVel: Celery doesn't make it easy to find a failed task
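The rest of the answer is cut off above. One way to get bulk access, not taken from the visible excerpt, is to store outcomes in a database-backed result backend; the sketch below assumes django-celery-results (an assumption; the question doesn't say which backend is in use), which keeps each outcome in a TaskResult row that can be filtered by status:

```python
from django_celery_results.models import TaskResult

# All recorded failures, queryable in bulk instead of one task id at a time.
failed = TaskResult.objects.filter(status='FAILURE')
for task in failed:
    print(task.task_id, task.date_done, task.result)
```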

Using Celery standalone in Python

限于喜欢 submitted on 2019-12-01 09:56:07
Simple usage:

1. Directory layout:

    - app_task.py
    - worker.py
    - result.py

2. In app_task.py, the file whose functions will run asynchronously, import celery and instantiate an app, passing in the message-broker and result-backend configuration:

    broker = 'redis://127.0.0.1:6379/1'   # use Redis database 1
    backend = 'redis://127.0.0.1:6379/2'  # use Redis database 2
    cel = celery.Celery('test', broker=broker, backend=backend)

3. Add the decorator to every function that should execute asynchronously:

    @cel.task
    def add(x, y):
        return x + y

4. Create a file worker.py to submit tasks:

    from app_task import add

    result = add.delay(4, 2)
    print(result)  # not the return value of add, but the id of a task placed on the message queue

5. Create a file result.py to receive the result:

    from celery.result import AsyncResult
    from app_task import cel

    # 2e76b24d-364f-46b7-a0eb-f69f663dfb0d
    async1 =
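The post is cut off mid-assignment. A sketch of how result.py typically continues in this tutorial pattern (the branch structure is an assumption; the task id is the one quoted in the comment above):

```python
from celery.result import AsyncResult
from app_task import cel

# The id is the one printed by worker.py when the task was queued.
async1 = AsyncResult(id='2e76b24d-364f-46b7-a0eb-f69f663dfb0d', app=cel)

if async1.successful():
    print(async1.get())   # the actual return value of add(4, 2), i.e. 6
elif async1.failed():
    print('task failed')
else:
    print('task is still pending or running')
```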


Celery is rerunning long running completed tasks over and over

狂风中的少年 submitted on 2019-12-01 07:18:25
Question: I have a Python celery-redis queue processing uploads and downloads worth gigs and gigs of data at a time. A few of the uploads take up to a few hours. Once such a task finishes, however, I'm witnessing bizarre celery behaviour: the scheduler reruns the just-concluded task by sending it to the worker again (I'm running a single worker), and it has just happened twice on the same task! Can someone help me understand why this is happening and how I can prevent it? The tasks are
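The question is cut off above, so no answer appears here. Not from the post itself, but the most commonly cited cause of exactly this symptom with a Redis broker is the transport's visibility_timeout: if a task runs longer than that timeout (one hour by default), Redis assumes the worker died and redelivers the message. A hedged sketch of raising it above the longest expected task duration (the app name and URL are made up):

```python
from celery import Celery

app = Celery('uploads', broker='redis://127.0.0.1:6379/0')

# Give long-running tasks 12 hours before Redis considers the
# message undelivered and hands it to a worker again.
app.conf.broker_transport_options = {'visibility_timeout': 43200}
```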