celery

Celery stop execution of a chain

坚强是说给别人听的谎言 submitted on 2019-11-27 11:49:58
Question: I have a check_orders task that's executed periodically. It creates a group of tasks so that I can time how long executing them took, and do something when they're all done (this is the purpose of res.join [1] and grouped_subs). The grouped tasks are pairs of chained tasks. What I want is: when the first task in a chain doesn't meet a condition (fails), don't execute the second task in the chain. I can't figure this out for the life of me, and I feel this is pretty basic functionality

How to write an Ubuntu Upstart job for Celery (django-celery) in a virtualenv

笑着哭i submitted on 2019-11-27 11:43:17
Question: I really enjoy using Upstart. I currently have Upstart jobs running different gunicorn instances in a number of virtualenvs. However, the two or three examples of Celery Upstart scripts I found on the web don't work for me. So, given the following variables, how would I write an Upstart job to run django-celery in a virtualenv? Path to Django project: /srv/projects/django_project Path to this project's virtualenv: /srv/environments/django_project Path to celery settings is the Django project
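For reference, one possible shape of such a job, using the paths from the question — an untested sketch, since the exec line varies with the django-celery version in use:

```
# /etc/init/celeryd.conf -- hypothetical Upstart job (untested sketch)
description "django-celery worker for django_project"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

# Run as an unprivileged user if your deployment has one
setuid www-data
setgid www-data

chdir /srv/projects/django_project

# Use the virtualenv's python so django-celery and its deps resolve
exec /srv/environments/django_project/bin/python manage.py celeryd --loglevel=info
```

The key detail is invoking the virtualenv's interpreter directly, which avoids having to activate the environment inside the job.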

Daemonizing celery

你。 submitted on 2019-11-27 10:11:11
Question: Following the instructions found here, I copied the script from GitHub into /etc/init.d/celeryd, then made it executable: $ ll /etc/init.d/celeryd -rwxr-xr-x 1 root root 9481 Feb 19 11:27 /etc/init.d/celeryd* I created the config file /etc/default/celeryd as per the instructions: # Names of nodes to start # most will only start one node: #CELERYD_NODES="worker1" # but you can also start multiple and configure settings # for each in CELERYD_OPTS (see `celery multi --help` for examples). CELERYD_NODES=
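A complete minimal version of that config, using the variable names from the generic-init.d script of that era (all paths and the user/group are placeholders to adapt):

```
# /etc/default/celeryd -- hedged example; adjust paths to your project
CELERYD_NODES="worker1"

# Where the project lives and which celeryd-multi to use
CELERYD_CHDIR="/path/to/project"
CELERYD_MULTI="$CELERYD_CHDIR/env/bin/celeryd-multi"

CELERYD_OPTS="--time-limit=300 --concurrency=8"

# %n expands to the node name
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# The worker should run as an unprivileged user
CELERYD_USER="celery"
CELERYD_GROUP="celery"
```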

How to run celery as a daemon in production?

不问归期 submitted on 2019-11-27 10:10:11
Question: I created a celeryd file in /etc/defaults/ from the code here: https://github.com/celery/celery/blob/3.0/extra/generic-init.d/celeryd Now when I want to run celeryd as a daemon and do this: sudo /etc/init.d/celerdy it says command not found. Where am I going wrong? Answer 1: I am not sure what you are doing here, but these are the steps to run Celery as a daemon. The file you referred to in the link https://github.com/celery/celery/blob/3.0/extra/generic-init.d/celeryd needs to be copied in
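Sketched as shell steps (note the split the question mixes up: the init script itself belongs in /etc/init.d/, while /etc/default/celeryd holds only its configuration variables):

```shell
# Hypothetical setup for the generic init.d script (run as root)
cp celeryd /etc/init.d/celeryd    # the script from the celery repo
chmod +x /etc/init.d/celeryd
# configuration is read from /etc/default/celeryd, created separately
/etc/init.d/celeryd start
/etc/init.d/celeryd status
```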

How to inspect and cancel Celery tasks by task name

流过昼夜 submitted on 2019-11-27 09:53:25
Question: I'm using Celery (3.0.15) with Redis as a broker. Is there a straightforward way to query the number of tasks with a given name that exist in a Celery queue? And, as a follow-up, is there a way to cancel all tasks with a given name that exist in a Celery queue? I've been through the Monitoring and Management Guide and don't see a solution there. Answer 1: # Retrieve tasks # Reference: http://docs.celeryproject.org/en/latest/reference/celery.events.state.html query = celery.events.state.tasks_by
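The remote-control API can also do both: `app.control.inspect()` replies map worker names to lists of task dicts carrying `name` and `id` keys, and the matching ids can then be revoked. A sketch with a hypothetical helper and made-up task names:

```python
def ids_for_task_name(inspect_reply, task_name):
    """Collect ids of tasks whose registered name matches task_name.

    inspect_reply is a mapping of worker name -> list of task dicts,
    as returned by app.control.inspect().active() or .reserved().
    """
    ids = []
    for tasks in (inspect_reply or {}).values():
        ids.extend(t["id"] for t in tasks if t["name"] == task_name)
    return ids

# Against a live app (requires a broker, so not run here):
#   i = app.control.inspect()
#   for reply in (i.active(), i.reserved()):
#       for task_id in ids_for_task_name(reply, "proj.tasks.check_orders"):
#           app.control.revoke(task_id)

sample = {"worker1@host": [{"id": "a1", "name": "proj.tasks.check_orders"},
                           {"id": "b2", "name": "proj.tasks.other"}]}
```

Counting is then just `len(ids_for_task_name(...))`; note this sees tasks held by workers, not messages still sitting in the Redis list.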

Notes on a supervisor pitfall

旧城冷巷雨未停 submitted on 2019-11-27 09:47:07
In production we have long used supervisor to manage our services, and it has been a pleasure: supervisor manages the `gunicorn` and `celery` processes, web serving and asynchronous tasks each do their own job, and everything has run stably. A while ago, though, I stumbled into a small pitfall. At first I thought it was a Celery problem; only after half a day of digging did I find that the root cause lay in supervisor.

The symptoms were spooky. A small project used asynchronous tasks, but one particular task sent to the async queue would sometimes succeed and sometimes fail, with no reliable way to reproduce the failure.

At first I suspected the task itself, but that didn't add up: no error was ever reported, and not even the log line on the first line of the task's code was emitted. Meanwhile every other task was normal, their logs were normal, and they succeeded every time.

So I turned to how the task was being invoked and first tried calling it synchronously. The synchronous call succeeded, which at least ruled out the task itself: the task function was correct.

Strange. Was I calling it the wrong way? I read through Celery's source and noticed that `apply_async` takes a `task_id` parameter. As it happened, I was calling the task via the `delay` function, and my business-level argument was also named `task_id` — and `delay` is a one-liner that calls `apply_async`.

At that point I felt I had found the key to the problem: surely a parameter-name collision, fixable by renaming the argument. So I renamed my `task_id` to `task_id_`
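The suspected collision can be reproduced without Celery at all. The sketch below uses made-up stand-ins for `apply_async` and a forwarding `delay`-style wrapper to show how a framework-level `task_id` keyword can swallow a business argument of the same name:

```python
def apply_async_like(args=None, kwargs=None, task_id=None):
    # Stand-in for Celery's apply_async: here 'task_id' is a framework
    # option (the id of the queued message), not an argument that gets
    # forwarded to the task function.
    return {"task_kwargs": kwargs or {}, "message_id": task_id}

def delay_like(**kwargs):
    # Stand-in for a wrapper that naively splats **kwargs through.
    # A business-level 'task_id' keyword is captured by the framework
    # parameter instead of reaching the task.
    return apply_async_like(**kwargs)

sent = delay_like(task_id="order-42")
# The task receives no kwargs; the framework consumed 'task_id'.
```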

How do I restart celery workers gracefully?

Deadly submitted on 2019-11-27 09:36:34
Question: When issuing a new build to update the code in workers, how do I restart Celery workers gracefully? Edit: What I intend to do is something like this: a worker is running, probably uploading a 100 MB file to S3; a new build arrives; the worker code has changed; the build script fires a signal to the worker(s); new workers start with the new code; the worker(s) that got the signal exit after finishing their existing job. Answer 1: The new recommended method of restarting a worker is documented here: http://docs
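The "finish the current job, then exit" behaviour in the question maps onto Celery's TERM signal, which triggers a warm shutdown: the worker completes the tasks it is currently executing and then stops. A hedged sketch (the pid-file paths and node name are made up):

```shell
# Ask the worker to finish its current tasks and exit (warm shutdown)
kill -TERM "$(cat /var/run/celery/worker1.pid)"

# Then start a fresh worker on the new code, e.g. via celery multi:
celery multi start worker1 -A proj --pidfile=/var/run/celery/%n.pid
```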

How do you unit test a Celery task?

丶灬走出姿态 submitted on 2019-11-27 09:27:17
Question: The Celery documentation mentions testing Celery within Django but doesn't explain how to test a Celery task if you are not using Django. How do you do this? Answer 1: It is possible to test tasks synchronously using any unittest lib out there. I normally do two different test sessions when working with Celery tasks. The first one (as I'm suggesting below) is completely synchronous and should be the one that makes sure the algorithm does what it should do. The second session uses the whole system

Why do we need message brokers like RabbitMQ over a database like PostgreSQL?

点点圈 submitted on 2019-11-27 09:09:01
Question: I am new to message brokers like RabbitMQ, which we can use to create task/message queues for a scheduling system like Celery. Now, here is the question: I can create a table in PostgreSQL that is appended with new tasks and consumed by a consumer program like Celery. Why on earth would I want to set up a whole new piece of technology for this, like RabbitMQ? Now, I believe scaling cannot be the answer, since a database like PostgreSQL can work in a distributed environment. I googled for what

Deleting all pending tasks in celery / rabbitmq

旧街凉风 submitted on 2019-11-27 08:57:31
Question: How can I delete all pending tasks without knowing the task_id of each task? Answer 1: From the docs: $ celery -A proj purge or from proj.celery import app app.control.purge() (EDIT: updated with the current method.) Answer 2: For Celery 3.0+: $ celery purge To purge a specific queue: $ celery -Q queue_name purge Answer 3: For Celery 2.x and 3.x: When using a worker with the -Q parameter to define queues, for example celery worker -Q queue1,queue2,queue3, then celery purge will not work, because you cannot pass the