celery

Running supervisord from the host, celery from a virtualenv (Django app)

百般思念 submitted on 2019-12-05 23:55:57
I'm trying to use celery and a redis queue to perform a task for my Django app. Supervisord is installed on the host via apt-get, whereas celery resides in a specific virtualenv on my system, installed via pip. As a result, I can't seem to get the celery command to run via supervisord. If I run it from inside the virtualenv it works fine; outside of it, it doesn't. How do I get it to run under my current setup? Is the solution simply to install celery via apt-get instead of inside the virtualenv? Please advise. My celery.conf inside /etc/supervisor/conf.d is: [program:celery] command=/home
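The usual fix is to point supervisord's command at the celery binary inside the virtualenv; the program then runs with that environment's packages without any need to "activate" it. A minimal sketch, where the paths, user, and project name are all hypothetical placeholders:

```ini
[program:celery]
; use the virtualenv's own celery executable directly
command=/home/myuser/venvs/myenv/bin/celery worker -A myproject --loglevel=INFO
directory=/home/myuser/myproject
user=myuser
autostart=true
autorestart=true
```

Installing celery system-wide via apt-get is not necessary; supervisord only needs an absolute path to an executable it can run.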

Celery Asynchronous Tasks and Scheduled Tasks

笑着哭i submitted on 2019-12-05 23:48:59
1. What is Celery? Celery is a simple, flexible, and reliable distributed system for processing large volumes of messages. It is an asynchronous task queue focused on real-time processing, and it also supports task scheduling. 2. Celery architecture: Celery's architecture consists of three parts: the message broker, the task execution units (workers), and the task result store. 2.1 Message broker: Celery provides no messaging service of its own, but it integrates easily with third-party message brokers, including RabbitMQ, Redis, and others. 2.2 Task execution unit: the worker is Celery's unit of task execution; workers run concurrently on the nodes of a distributed system. 2.3 Task result store: the task result store holds the results of tasks executed by workers; Celery supports storing results in different backends, including AMQP and Redis. 2.4 Supported versions: Celery version 4.0 runs on Python (2.7, 3.4, 3.5) and PyPy (5.4, 5.5). This is the last version to support Python 2.7, and from the next version (Celery 5.x) Python 3.5 or newer is required. If you're running an
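The three-part architecture described above can be sketched with nothing but the standard library: a queue.Queue stands in for the broker, a thread plays the worker, and a dict acts as the result store. This illustrates the roles only, not Celery's actual implementation:

```python
import queue
import threading

broker = queue.Queue()   # message broker: holds pending task messages
results = {}             # task result store: task id -> result

def worker():
    """Task execution unit: consumes task messages from the broker."""
    while True:
        task_id, func, args = broker.get()
        if func is None:          # sentinel message: shut the worker down
            break
        results[task_id] = func(*args)

t = threading.Thread(target=worker)
t.start()

# a "client" publishes a task message to the broker
broker.put(("task-1", lambda x, y: x + y, (2, 3)))
broker.put((None, None, None))    # tell the worker to stop
t.join()
```

In real Celery the broker is an external service (RabbitMQ, Redis), workers are separate processes on any number of machines, and the result store is a configurable backend.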

Celery, a Powerful Distributed Asynchronous Task Queue

瘦欲@ submitted on 2019-12-05 23:48:18
I recently looked into Celery, the asynchronous task tool, and found it excellent; you could call it highly available. If you issue a task to Celery, as long as a Celery worker is running, the task will be executed; if a worker fails, say through a power or network outage, the task will continue once the worker comes back up. This has real practical value: suppose a trading system receives a flood of trade requests and the host goes down; front-end users can keep submitting trade requests, and after submitting they don't have to wait. Once the host recovers, the already-submitted requests continue to execute; users simply receive their trade confirmations later, and the user experience does not suffer. Celery overview: it is an asynchronous task scheduling tool. Users produce tasks with Celery, a broker relays them, and task execution units consume them from the broker. The execution units can be deployed on a single machine or distributed across many, so Celery is a highly available producer-consumer asynchronous task queue. You can hand your tasks to Celery, or let Celery schedule them automatically, crontab-style, and go do other things; you can check a task's status at any time, and have Celery report the result automatically when it finishes. Use cases: highly concurrent request tasks. With the internet everywhere, everyday transactions happen online, which inevitably produces bursts of extremely high-concurrency requests at certain times, such as the wealth-management purchases or student fee payments common at a company

What are celery, redis, and RabbitMQ, and how do they differ?

試著忘記壹切 submitted on 2019-12-05 23:46:03
Celery: Celery is a distributed task queue developed in Python. It supports scheduling task execution across distributed machines, processes, and threads using task queues. 1. Celery workflow: Message broker: Celery provides no messaging service of its own, but it integrates easily with third-party message brokers, including RabbitMQ, Redis, MongoDB, SQLAlchemy, etc.; RabbitMQ and Redis are the stable choices, while the others are considered experimental. Worker: the worker is Celery's task execution unit; workers run concurrently on the nodes of a distributed system. Result store: the result store holds the results of tasks executed by workers; mainstream backends such as AMQP, Redis, MongoDB, and MySQL are supported. 2. Concurrency, serialization, compression: Celery supports concurrent task execution via prefork, eventlet, gevent, and threads; serialization via pickle, json, yaml, msgpack, etc.; and compression via zlib and bzip2. 3. Usage tips and optimizations: (1) If your broker is RabbitMQ, you can install the C-language client librabbitmq to improve performance: pip install librabbitmq; (2) via BROKER_POOL
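The serializer and compression choices listed above are plain Celery settings. A typical configuration fragment, using the old-style (Celery 3.x) setting names and illustrative values, might look like this:

```python
# settings.py -- Celery 3.x style configuration names
CELERY_TASK_SERIALIZER = 'json'       # serialize task messages as JSON
CELERY_RESULT_SERIALIZER = 'json'     # serialize results as JSON
CELERY_ACCEPT_CONTENT = ['json']      # reject other content types (e.g. pickle)
CELERY_MESSAGE_COMPRESSION = 'zlib'   # compress message bodies with zlib
```

Restricting accepted content to json is also a common security measure, since unpickling untrusted messages can execute arbitrary code.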

Stopping celery task gracefully

本小妞迷上赌 submitted on 2019-12-05 23:44:20
I'd like to quit a celery task gracefully (i.e. not by calling revoke(celery_task_id, terminate=True)). I thought I'd send a message to the task that sets a flag, so that the task function can return. What's the best way to communicate with a task? Cairnarvon: Use signals for this. Celery's revoke is the right choice; it uses SIGTERM by default, but you can specify another using the signal argument if you prefer. Just set a signal handler for it in your task (using the signal module) that terminates the task gracefully. Antonio Cabanas: Also you can use an AbortableTask. I think this is the
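The signal-handler approach from the first answer can be sketched in plain Python: the handler only flips a flag, and the task's loop checks the flag so it can finish the current item and return cleanly. The task body here is a hypothetical stand-in:

```python
import signal

shutdown_requested = False

def handle_sigterm(signum, frame):
    # just record the request; let the task exit at a safe point
    global shutdown_requested
    shutdown_requested = True

# revoke() sends SIGTERM by default, which this handler intercepts
signal.signal(signal.SIGTERM, handle_sigterm)

def process_items(items):
    """Hypothetical task body: stops between items once SIGTERM arrives."""
    done = []
    for item in items:
        if shutdown_requested:
            break          # return gracefully instead of dying mid-item
        done.append(item * 2)
    return done
```

The key point is that the handler does no real work itself; the task decides where it is safe to stop.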

Scheduling celery tasks with large ETA

℡╲_俬逩灬. submitted on 2019-12-05 22:37:39
I am currently experimenting with future tasks in celery using the ETA feature and a redis broker. One of the known issues with using a redis broker has to do with the visibility timeout: if a task isn't acknowledged within the visibility timeout, the task will be redelivered to another worker and executed. This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact, if that happens the task will be executed again, and again, in a loop. Some tasks that I can envision will have an ETA on the timescale of weeks or months. Setting the visibility
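With a Redis broker the visibility timeout is set through the broker transport options, and the common workaround is to raise it above the longest ETA you ever schedule. A sketch using the old-style (Celery 3.x) setting name, with an illustrative value:

```python
# settings.py -- make the visibility timeout exceed the longest scheduled ETA
BROKER_TRANSPORT_OPTIONS = {
    'visibility_timeout': 60 * 60 * 24 * 45,  # 45 days, in seconds
}
```

For ETAs on the scale of months this becomes impractical; a common alternative design is to keep far-future schedule entries in a database and only enqueue the Celery task shortly before it is due.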

Using celery with Django

随声附和 submitted on 2019-12-05 22:06:32
0. Installed packages:
cachetools 3.1.1
celery 3.1.26.post2
celery-with-redis 3.0
certifi 2019.9.11
Django 2.2.6
django-allauth 0.40.0
django-appconf 1.0.3
django-celery 3.3.1
django-celery-results 1.0.0
django-compressor 2.3
django-contrib-comments 1.9.1
django-cors-headers 3.1.1
django-crispy-forms 1.8.0
django-environ 0.4.5
django-filter 2.2.0
django-fluent-comments 2.1
django-formtools 2.1
django-haystack 2.8.1
django-import-export 1.2.0
django-markdownx 2.0.28
django-redis 4.10.0
django-redis-cache 2.1.0
django-redis-sessions 0.6.1
django-rest-auth 0.9.5
django-rest-framework 0.1.0
django-reversion 3.0.4

Django Celery Periodic Tasks Run But RabbitMQ Queues Aren't Consumed

半世苍凉 submitted on 2019-12-05 20:58:55
Question: After running tasks via celery's periodic task scheduler, beat, why do I have so many unconsumed queues remaining in RabbitMQ?
Setup: Django web app running on Heroku; tasks scheduled via celery beat; tasks run via celery worker; message broker is RabbitMQ from CloudAMQP.
Procfile:
web: gunicorn --workers=2 --worker-class=gevent --bind=0.0.0.0:$PORT project_name.wsgi:application
scheduler: python manage.py celery worker --loglevel=ERROR -B -E --maxtasksperchild=1000
worker: python manage.py celery worker -E --maxtasksperchild=1000 --loglevel=ERROR
settings.py: CELERYBEAT_SCHEDULE = { 'do
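With the amqp result backend, every task whose result is stored creates its own queue in RabbitMQ, which is the usual cause of unconsumed queues piling up in a setup like this. A hedged sketch of the common fixes, using old-style (Celery 3.x) setting names:

```python
# settings.py -- avoid accumulating per-result queues in RabbitMQ
CELERY_IGNORE_RESULT = True           # don't store results nobody reads
# or, if results are actually needed, let them expire instead:
# CELERY_TASK_RESULT_EXPIRES = 3600   # seconds
```

Periodic beat tasks rarely need stored results, so ignoring them is usually the simpler choice.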

Replacing Celerybeat with Chronos

拥有回忆 submitted on 2019-12-05 20:26:06
Question: How mature is Chronos? Is it a viable alternative to a scheduler like celery-beat? Right now our scheduling implements a periodic "heartbeat" task that checks for "outstanding" events and fires them if they are overdue. We are using python-dateutil's rrule to define this. We are looking at alternatives to this approach, and Chronos seems a very attractive alternative: 1) it would remove the need for a heartbeat scheduled task, 2) it supports RESTful submission of events with ISO8601
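The "heartbeat" approach described above boils down to periodically selecting the events whose scheduled time has passed and firing them. A minimal stdlib sketch, where the event shape is a hypothetical stand-in for however events are actually stored:

```python
from datetime import datetime, timedelta

def overdue(events, now):
    """Return the events the heartbeat should fire: scheduled time <= now."""
    return [name for name, when in events if when <= now]

now = datetime(2019, 12, 5, 12, 0)
events = [
    ("send-report", now - timedelta(minutes=5)),  # already due -> fire
    ("cleanup", now + timedelta(hours=1)),        # not yet due -> skip
]
```

Chronos (or any external scheduler) replaces this polling loop by invoking the job directly at its scheduled time.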

Tornado celery can't use gen.Task or CallBack

回眸只為那壹抹淺笑 submitted on 2019-12-05 19:04:35
class AsyncHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        tasks.sleep.apply_async(args=[5], callback=self.on_result)

    def on_result(self, response):
        self.write(str(response.result))
        self.finish()

This raises:

    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <bound method AsyncHandler.on_result of <__main__.AsyncHandler object at 0x10e7a19d0>> is not JSON serializable

The broker and backend both use redis; I just copied the code from https://github.com/mher/tornado-celery When I use an amqp broker and a redis backend it works well, but not when using the redis