celery

supervisord always returns exit status 127 at WebFaction

半城伤御伤魂 submitted on 2019-12-05 14:54:59
I keep getting the following errors from supervisord at WebFaction when tailing the log:

INFO exited: my_app (exit status 127; not expected)
INFO gave up: my_app entered FATAL state, too many start retries too quickly

Here's my supervisord.conf:

[unix_http_server]
file=/home/btaylordesign/tmp/supervisord.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///home/btaylordesign/tmp/supervisord.sock

[supervisord]
logfile=/home/btaylordesign/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=5
loglevel
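For what it's worth, exit status 127 is the shell's "command not found", so the usual culprit is a command= line that supervisord cannot resolve on its PATH. No [program:...] section is shown above, so the following is only a sketch of what such a section might look like with absolute paths; the virtualenv and app paths are assumptions, not taken from the question:

[program:my_app]
; absolute paths avoid any dependence on supervisord's PATH
command=/home/btaylordesign/webapps/my_app/env/bin/python /home/btaylordesign/webapps/my_app/manage.py celeryd --loglevel=INFO
directory=/home/btaylordesign/webapps/my_app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/btaylordesign/logs/my_app.log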

Django Celery Time Limit Exceeded?

拟墨画扇 submitted on 2019-12-05 12:53:14
Question: I keep receiving this error...

[2012-06-14 11:54:50,072: ERROR/MainProcess] Hard time limit (300s) exceeded for movies.tasks.encode_media[14cad954-26e2-4511-94ec-b17b9a4149bb]
[2012-06-14 11:54:50,111: ERROR/MainProcess] Task movies.tasks.encode_media[bc173429-77ae-4c96-b987-75337f915ec5] raised exception: TimeLimitExceeded(300,)
Traceback (most recent call last):
  File "/srv/virtualenvs/filmlib/local/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 370, in _on_hard
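A hedged note: the 300s in that log is Celery's hard time limit, so if encode_media legitimately needs longer, the limit can be raised globally or per task. A minimal sketch using the Celery 3.x-era setting names that match this question; the one-hour values and the media_id argument are assumptions:

# celeryconfig.py -- raise the limits globally (values are assumptions)
CELERYD_TASK_TIME_LIMIT = 60 * 60        # hard limit: the worker child is killed
CELERYD_TASK_SOFT_TIME_LIMIT = 55 * 60   # soft limit: SoftTimeLimitExceeded is raised first

# or per task, leaving every other task on the defaults
from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded

app = Celery('movies')

@app.task(time_limit=3600, soft_time_limit=3300)
def encode_media(media_id):
    try:
        pass  # the long-running encode goes here
    except SoftTimeLimitExceeded:
        pass  # clean up partial output before the hard limit kills the process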

Celery task state depends on CELERY_TASK_RESULT_EXPIRES

。_饼干妹妹 submitted on 2019-12-05 12:25:55
From what I have seen, the task state depends entirely on the value set for CELERY_TASK_RESULT_EXPIRES: if I check the task state within this interval after the task has finished executing, the state returned by AsyncResult(task_id).state is correct. If not, the state is never updated and remains PENDING forever. Can anyone explain why this happens? Is this a feature or a bug? Why does the task state depend on the result expiry time, even if I am ignoring results? (Celery version: 3.0.23, result backend: AMQP) State and result are the same. The result backend was initially used
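This is consistent with how the AMQP result backend behaves: each result lives in a short-lived reply queue that is discarded after CELERY_TASK_RESULT_EXPIRES, after which AsyncResult can only report the default PENDING state. A sketch of the two usual workarounds, assuming either a longer expiry or a persistent backend (the Redis URL and expiry value are assumptions):

# celeryconfig.py
# Option 1: keep the AMQP-backend results around longer
CELERY_RESULT_BACKEND = 'amqp'
CELERY_TASK_RESULT_EXPIRES = 7 * 24 * 3600   # seconds

# Option 2: use a persistent key/value backend so the state outlives the
# broker's temporary reply queues
# CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'

# afterwards AsyncResult(task_id).state reports whatever the backend still stores;
# with ignore_result enabled nothing is stored at all, so PENDING is expected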

What are the django-celery (djcelery) tables for?

萝らか妹 submitted on 2019-12-05 12:24:48
Question: When I run syncdb, I notice a lot of tables created, like: djcelery_crontabschedule ... djcelery_taskstate. django-kombu is providing the transport, so it can't be related to the actual queue. Even when I run tasks, I still see nothing populated in these tables. What are these tables used for? Monitoring purposes only, if I enable it? If so, is it also true that if I do a lookup of AsyncResult(), I'm guessing that is actually looking up the task result via the django-kombu tables instead of
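As a hedged summary: the djcelery_* tables are only used by the django-celery features you explicitly switch on. The schedule tables back the database-driven beat scheduler, djcelery_taskstate is filled by the snapshot camera for the admin monitor, and AsyncResult() reads from whatever CELERY_RESULT_BACKEND points at rather than from these tables. A sketch of the settings that would actually populate them (values are illustrative):

# settings.py -- only needed if you want these django-celery features
import djcelery
djcelery.setup_loader()

# periodic tasks stored in djcelery_crontabschedule / djcelery_periodictask
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'

# task results stored via the Django ORM instead of the transport tables
CELERY_RESULT_BACKEND = 'database'

# djcelery_taskstate is only filled while the snapshot camera is running:
#   python manage.py celerycam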

How can I automatically reload tasks modules with Celery daemon?

陌路散爱 submitted on 2019-12-05 11:40:11
Question: I am using Fabric to deploy a Celery broker (running RabbitMQ) and multiple Celery workers, with celeryd daemonized through supervisor. I cannot for the life of me figure out how to reload the tasks.py module short of rebooting the servers.

/etc/supervisor/conf.d/celeryd.conf

[program:celeryd]
directory=/fab-mrv/celeryd
environment=[RABBITMQ credentials here]
command=xvfb-run celeryd --loglevel=INFO --autoreload
autostart=true
autorestart=true

celeryconfig.py

import os
## Broker settings
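Two approaches that may fit this setup, both hedged since they rely on Celery 3.x features: the --autoreload flag already in the command only watches modules that were imported as task modules, and a deploy script can additionally ask running workers to restart their pools and re-import code through the pool_restart remote-control command (which requires CELERYD_POOL_RESTARTS = True). A sketch of what a Fabric deploy step might call; the module name 'tasks' is an assumption:

# requires CELERYD_POOL_RESTARTS = True in celeryconfig.py (Celery 3.x)
from celery import current_app

def reload_workers():
    # ask every worker to restart its pool and re-import the tasks module
    current_app.control.broadcast(
        'pool_restart',
        arguments={'reload': True, 'modules': ['tasks']},
    )

Failing that, the simplest reliable option is usually to have Fabric run supervisorctl restart celeryd after the code push.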

Broadcast messages in celery

ⅰ亾dé卋堺 submitted on 2019-12-05 11:05:42
I'm using Celery and want to send a broadcast task to a couple of workers. I'm trying to do it as described at http://docs.celeryproject.org/en/latest/userguide/routing.html#broadcast, so I created a simple app with a task:

@celery.task
def do_something(value):
    print value

and in the app I added:

from kombu.common import Broadcast
CELERY_QUEUES = (Broadcast('broadcast_tasks'), )
CELERY_ROUTES = {'my_app.do_something': {'queue': 'broadcast_tasks'}}

and then I tried to send the task to the workers with:

my_app.do_something.apply_async(['222'], queue='broadcast_tasks')

or:

my_app.do_something.apply_async([
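One detail that is easy to miss with kombu Broadcast queues: every worker that should receive the message must actually consume from the broadcast queue, otherwise the task never reaches it. A sketch of both sides, reusing the names from the question (the worker command line is illustrative):

# celeryconfig.py
from kombu.common import Broadcast

CELERY_QUEUES = (Broadcast('broadcast_tasks'),)
CELERY_ROUTES = {'my_app.do_something': {'queue': 'broadcast_tasks'}}

# start each worker so it listens on the broadcast queue, e.g.:
#   celery worker -A my_app -Q broadcast_tasks --loglevel=INFO

# publisher side: every listening worker gets its own copy of the task
# my_app.do_something.apply_async(args=['222'], queue='broadcast_tasks')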

Celery configure separate connection for producer and consumer

筅森魡賤 submitted on 2019-12-05 11:00:20
We have an application set up on Heroku that uses Celery to run background jobs. The Celery app uses RabbitMQ as the broker; we use Heroku's RabbitMQ Bigwig add-on as the AMQP message broker. This add-on provides two separate URLs, one optimized for producers and the other optimized for consumers. The RabbitMQ documentation also recommends using separate connections for producing and consuming. The Celery documentation does not describe a way to specify separate connections for the producer and the consumer. Is there a way to specify two different broker URLs in Celery? Unfortunately, there isn't a
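Celery itself only accepts a single broker URL, but apply_async takes an explicit connection, so one workaround is to let the workers consume from the consumer-optimized URL and publish through a connection built from the producer-optimized URL. A sketch of that idea; the environment-variable names are assumptions rather than the add-on's documented ones:

import os
from celery import Celery

# workers consume from the consumer-optimized URL (env var names are assumptions)
app = Celery('proj', broker=os.environ.get('RABBITMQ_BIGWIG_RX_URL'))

@app.task
def add(x, y):
    return x + y

PRODUCER_URL = os.environ.get('RABBITMQ_BIGWIG_TX_URL')

def enqueue_add(x, y):
    # publish over a dedicated connection built from the producer-optimized URL
    with app.connection(PRODUCER_URL) as conn:
        return add.apply_async(args=(x, y), connection=conn)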

Celery design help: how to prevent concurrently executing tasks

◇◆丶佛笑我妖孽 submitted on 2019-12-05 10:53:46
I'm fairly new to Celery/AMQP and am trying to come up with a task/queue/worker design that meets the following requirements. I have multiple types of "per-user" tasks: e.g., TaskA, TaskB, TaskC. Each of these "per-user" tasks reads/writes data for one particular user in the system. So at any given time, I might need to create tasks User1_TaskA, User1_TaskB, User1_TaskC, User2_TaskA, User2_TaskB, etc. I need to ensure that, for each user, no two tasks of any task type execute concurrently. I want a system in which no worker can execute User1_TaskA at the same time as any other worker is executing
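Two common designs for this: route each user's tasks to a dedicated queue consumed by a single worker with --concurrency=1, or guard the task body with a distributed lock keyed by the user id (the "ensuring a task is only executed one at a time" pattern from the Celery docs). A sketch of the lock variant using Django's cache; the task name, TTL, and retry interval are assumptions, and cache.add is only atomic on backends such as memcached or Redis:

from django.core.cache import cache
from celery import shared_task

LOCK_TTL = 60 * 10  # safety expiry so a crashed worker cannot hold the lock forever

@shared_task(bind=True, max_retries=None)
def task_a(self, user_id):
    lock_key = 'user-lock-%s' % user_id
    # cache.add only succeeds if the key does not exist yet
    if not cache.add(lock_key, 'locked', LOCK_TTL):
        # another per-user task holds the lock; try again shortly
        raise self.retry(countdown=5)
    try:
        pass  # the actual per-user work goes here
    finally:
        cache.delete(lock_key)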

A First Look at Celery

杀马特。学长 韩版系。学妹 submitted on 2019-12-05 10:42:44
Asynchronous tasks

Asynchronous tasks are a very common technique in web development. Operations that are time- or resource-intensive are often isolated from the main application and executed asynchronously. For example, in a registration feature, after a user successfully signs up with an email address, an activation email has to be sent to that address. If this is done directly in the application, the email-sending call blocks on network I/O; a more elegant approach is to use an asynchronous task that the application triggers from its business logic.

Celery is a scheduling tool for asynchronous tasks. It is a library written in Python, but the messaging protocol it implements can also be called from Ruby, PHP, JavaScript, and so on. Besides background execution through a message queue, asynchronous tasks can also take the form of time-based scheduled tasks. The following sections describe how to use Celery to meet both needs.

Celery broker and backend

When first getting to know Celery you will inevitably run into terms like Redis and RabbitMQ and be confused, yet this is exactly where Celery's design shines. In short, RabbitMQ is a powerful message-queue tool written in Erlang, and it can play the role of the broker in Celery. So what exactly is a broker? A broker is message-transport middleware; you can think of it as a mailbox. Whenever the application calls one of Celery's asynchronous tasks, it sends a message to the broker, and a Celery worker later picks up the message and runs the corresponding code. So this mailbox can be seen as a message queue
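To make the broker/worker flow above concrete, here is a minimal sketch; the broker URL, backend URL, and module layout are assumptions:

# tasks.py -- minimal Celery app: RabbitMQ as the broker, Redis as the result backend
from celery import Celery

app = Celery('tasks',
             broker='amqp://guest:guest@localhost:5672//',   # the "mailbox"
             backend='redis://localhost:6379/0')             # where results are stored

@app.task
def send_activation_email(address):
    # placeholder for the real email-sending logic
    print('sending activation mail to %s' % address)

# caller side: this only drops a message into the broker and returns immediately;
# a worker started with `celery -A tasks worker` picks it up and runs it:
#   send_activation_email.delay('user@example.com')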

Celery task history

十年热恋 submitted on 2019-12-05 10:39:59
I am building a framework for executing tasks on top of the Celery framework. I would like to see the list of recently executed tasks (for the last 2-7 days). Looking at the API I can find the app.backend object, but cannot figure out how to query it to fetch tasks. For example, I can use backends like Redis or a database, but I do not want to write raw SQL queries against the database. Is there a way to work with task history/results through the API? I tried to use Flower, but it only handles events and cannot show history from before it was started. You need to keep the task results in a backend, for example Redis.
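As the answer notes, there is only history if a result backend keeps it. With the Redis backend, for instance, each finished task is stored under a celery-task-meta-<task_id> key until CELERY_TASK_RESULT_EXPIRES elapses, so a rough history listing can be produced by scanning those keys. A sketch under those assumptions (the URL, expiry, and JSON result serializer are all assumptions, and the stored fields vary between Celery versions):

# celeryconfig.py -- results must be kept in a backend for there to be any history
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
CELERY_TASK_RESULT_EXPIRES = 7 * 24 * 3600   # keep results for 7 days
CELERY_RESULT_SERIALIZER = 'json'

# rough "history" query by scanning the backend's keys (Redis-specific)
import json
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=1)
for key in r.scan_iter('celery-task-meta-*'):
    meta = json.loads(r.get(key))
    print(key, meta.get('status'), meta.get('date_done'))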