celery

The celery framework

一曲冷凌霜 submitted on 2019-12-03 02:56:23
Official Celery resources: website: http://www.celeryproject.org/ ; documentation (English): http://docs.celeryproject.org/en/latest/index.html ; documentation (Chinese): http://docs.jinkan.org/docs/celery/

Celery architecture: Celery's architecture consists of three parts: a message broker, task execution units (workers), and a task result store (backend).

Message broker: Celery provides no message service of its own, but it integrates easily with third-party message brokers such as RabbitMQ and Redis.
Task execution unit: the worker is Celery's unit of task execution; workers run concurrently across the nodes of a distributed system.
Task result store: the task result store holds the results of tasks executed by workers; Celery can store results in several backends, including AMQP and Redis.

Use cases. Asynchronous tasks: hand time-consuming work to Celery to run asynchronously, e.g. sending SMS/email, push notifications, audio/video processing. Periodic tasks: run something on a schedule, e.g. daily statistics.

Installing Celery: pip install celery
Message broker: RabbitMQ
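A minimal sketch of those three parts wired together. Assumptions not in the text above: the app name demo, a local Redis instance serving as both broker and backend, and the add task itself.

    # Sketch: one Celery app wiring together broker, worker task, and result backend.
    from celery import Celery

    app = Celery(
        'demo',
        broker='redis://localhost:6379/0',   # message broker: where pending tasks wait
        backend='redis://localhost:6379/1',  # result store: where return values land
    )

    @app.task
    def add(x, y):
        # Executed by a worker process, not by the caller.
        return x + y

Start a worker with celery -A demo worker --loglevel=info, then add.delay(2, 3) submits the task asynchronously.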

Celery node fails; pidbox "already using this process mailbox" warning on restart

Anonymous (unverified) submitted on 2019-12-03 02:56:01
Question: I have Celery running with a RabbitMQ broker. Today one of my Celery nodes failed: it stopped executing tasks and did not respond to the service celeryd stop command. After a few attempts the node stopped, but on start I get this message:

    [WARNING/MainProcess] celery@nodename ready.
    [WARNING/MainProcess] /home/ubuntu/virtualenv/project_1/local/lib/python2.7/site-packages/kombu/pidbox.py:73: UserWarning: A node named u'nodename' is already using this process mailbox! Maybe you forgot to shutdown the other node or did not do so properly? Or if you meant to start
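The warning above usually means two worker nodes were started with (or one left behind) the same node name, so they contend for one pidbox mailbox. A hedged sketch of one remedy, giving each node a unique hostname; the proj app name is hypothetical, and the exact worker_main argv convention varies across Celery versions:

    from celery import Celery

    app = Celery('proj', broker='amqp://')

    if __name__ == '__main__':
        # Equivalent to the CLI invocation: celery worker -n worker1@%h
        # A unique --hostname gives each node its own pidbox mailbox, so a
        # restarted worker no longer collides with a stale duplicate node.
        app.worker_main(['worker', '--hostname=worker1@%h', '--loglevel=INFO'])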

How can I run a celery periodic task from the shell manually?

人走茶凉 submitted on 2019-12-03 02:54:23
Question: I'm using celery and django-celery. I have defined a periodic task that I'd like to test. Is it possible to run the periodic task from the shell manually so that I can view the console output?

Answer 1: Have you tried just running the task from the Django shell? You can use the .apply method of a task to ensure that it is run eagerly and locally. Assuming the task is called my_task in Django app myapp in a tasks submodule:

    $ python manage.py shell
    >>> from myapp.tasks import my_task
    >>> eager_result =
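The answer is cut off above at the assignment; a minimal sketch of how the eager call typically continues, using only the .apply() method the answer names (my_task and myapp come from the excerpt):

    # Inside `python manage.py shell`: .apply() runs the task eagerly and
    # locally, so any print/log output appears right in the console.
    from myapp.tasks import my_task

    eager_result = my_task.apply()   # runs in-process, returns an EagerResult
    print(eager_result.get())        # the task's return value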

Django Celery Database for Models on Producer and Worker

Anonymous (unverified) submitted on 2019-12-03 02:50:02
Question: I want to develop an application which uses Django as the frontend and Celery to do background work. Sometimes Celery workers on different machines need database access to my Django frontend machine (two different servers). They need to know some realtime state, and to run the Django app with python manage.py celeryd they need access to a database with all models available. Do I have to access my MySQL database through a direct connection? Thus I have to allow user "my-django-app" access not only from localhost on my frontend
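One straightforward reading of the question's own suggestion is: yes, point each worker's Django settings at the frontend's MySQL server over the network. A hedged sketch; every value below except the "my-django-app" user from the question is a placeholder:

    # settings.py on each worker machine: identical models, identical
    # database, but HOST names the frontend server instead of localhost.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'myproject',             # hypothetical database name
            'USER': 'my-django-app',         # user named in the question
            'PASSWORD': 'change-me',         # placeholder
            'HOST': 'frontend.example.com',  # the Django frontend machine
            'PORT': '3306',
        }
    }

MySQL must also grant that user access from the worker hosts, which is exactly the "not only from localhost" point the question raises.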

Celery 'Getting Started' not able to retrieve results; always pending

Anonymous (unverified) submitted on 2019-12-03 02:50:02
Question: I've been trying to follow the Celery First Steps With Celery and Next Steps guides. My setup is Windows 7 64-bit, Anaconda Python 2.7 (32-bit), installed Erlang 32-bit binaries, RabbitMQ server, and celery (with pip install celery). Following the guide I created a proj folder with __init__.py, tasks.py, and celery.py. My __init__.py is empty. Here's celery.py:

    from __future__ import absolute_import
    from celery import Celery

    app = Celery('proj',
                 broker='amqp://',
                 backend='amqp://',
                 include=['proj.tasks'])
    # Optional configuration, see the
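The celery.py excerpt is cut off at the comment; a sketch of how that file continues in the Next Steps guide of that era (the result-expiry value is the guide's example, not a requirement):

    # Optional configuration, see the application user guide.
    app.conf.update(
        CELERY_TASK_RESULT_EXPIRES=3600,  # keep results for one hour
    )

    if __name__ == '__main__':
        app.start()

As for results staying PENDING on Windows: a commonly reported workaround is to start the worker with the solo pool (celery -A proj worker --pool=solo), though the root cause varies by setup.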

Django Celery ConnectionError: Too many heartbeats missed

Anonymous (unverified) submitted on 2019-12-03 02:49:01
Question: How can I solve the ConnectionError: Too many heartbeats missed from Celery?

Example error:

    [2013-02-11 15:15:38,513: ERROR/MainProcess] Error in timer: ConnectionError('Too many heartbeats missed', None, None, None, '')
    Traceback (most recent call last):
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py", line 97, in apply_entry
        entry()
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py", line 51, in __call__
        return self.fun(*self.args, **self.kwargs)
      File "/app/.heroku/python/lib
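The excerpt ends before any answer; one commonly cited remedy for this error on hosted brokers (the traceback is from Heroku) is to disable AMQP heartbeats entirely. A hedged sketch using the pre-4.0 setting name; newer Celery spells it broker_heartbeat:

    # Django settings.py / celeryconfig: with heartbeats disabled, a slow
    # worker-to-broker link is no longer torn down for missed heartbeats.
    BROKER_HEARTBEAT = None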

Celery Architecture

大城市里の小女人 submitted on 2019-12-03 02:45:25
What is celery?
Celery is used to execute some code later, or to have a scheduler run that code on a schedule.

Celery architecture
Celery's architecture consists of three parts: a message broker, task execution units (workers), and a task result store (backend).

Message broker: Celery provides no message service of its own, but integrates easily with third-party brokers such as RabbitMQ and Redis.
Task execution unit: the worker is Celery's unit of task execution; workers run concurrently across the nodes of a distributed system.
Task result store: the task result store holds the results of tasks executed by workers; Celery can store results in several backends, including AMQP and Redis.

Summary (a flow sketch follows after this list):
"""
1. The celery framework ships with its own socket handling, so it runs as a standalone service.
2. Starting the celery service is what executes the tasks in it: the service carries a task-executing object that runs whichever tasks are ready and saves their results.
3. The celery framework consists of three parts: the broker holding tasks to execute, the worker object that executes them, and the backend holding task results.
4. The installed celery package provides only the worker by default; the broker and backend (the two storage pieces) must be supplied by other technologies.
"""
How it works, at a basic level: 1
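A minimal sketch of that broker -> worker -> backend flow, reusing the hypothetical demo app and add task from the sketch near the top of this page:

    from demo import add  # hypothetical module holding the Celery app and task

    # Producer side: .delay() serializes the call and places it on the broker.
    result = add.delay(2, 3)

    # A separately started worker (celery -A demo worker) picks the task up,
    # executes it, and writes the return value to the backend.
    print(result.get(timeout=10))  # -> 5, read back from the result store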

Run a Scrapy spider in a Celery Task

Anonymous (unverified) submitted on 2019-12-03 02:45:02
Question: This is not working anymore; scrapy's API has changed. Now the documentation features a way to "Run Scrapy from a script", but I get the ReactorNotRestartable error. My task:

    from celery import Task
    from twisted.internet import reactor
    from scrapy.crawler import Crawler
    from scrapy import log, signals
    from scrapy.utils.project import get_project_settings
    from .spiders import MySpider

    class MyTask(Task):
        def run(self, *args, **kwargs):
            spider = MySpider
            settings = get_project_settings()
            crawler = Crawler(settings)
            crawler.signals.connect
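A common workaround for ReactorNotRestartable is to run each crawl in a fresh child process, so the Twisted reactor is born and dies with that process instead of living inside the long-running Celery worker. A hedged sketch using the newer CrawlerProcess API rather than the question's Crawler/log API; the spider name is hypothetical:

    from multiprocessing import Process

    from celery import shared_task
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    def _crawl():
        # Runs in the child process: the reactor starts and stops here,
        # so every task gets a brand-new reactor.
        process = CrawlerProcess(get_project_settings())
        process.crawl('myspider')  # hypothetical spider name
        process.start()            # blocks until the crawl finishes

    @shared_task
    def run_spider():
        p = Process(target=_crawl)
        p.start()
        p.join()

Note that Celery's prefork pool runs tasks in daemonized processes, which multiprocessing refuses to fork from; swapping in billiard's Process or running the worker with --pool=solo are commonly used ways around that.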