celeryd

Best way to map a generated list to a task in celery

拈花ヽ惹草 submitted on 2019-11-29 21:06:47
Question: I am looking for some advice as to the best way to map a list generated from a task to another task in Celery. Let's say I have a task called parse, which parses a PDF document and outputs a list of pages. Each page then needs to be individually passed to another task called feed. This all needs to go inside a task called process. So, one way I could do that is this:

    @celery.task
    def process():
        pages = parse.s(path_to_pdf).get()
        feed.map(pages)

Of course, that is not a good idea because I am
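Calling .get() from inside a task is exactly the problem this question runs into; the usual alternative is to chain the parser into a callback task that fans the pages out. A minimal sketch of that pattern, assuming parse returns a list of page payloads; dispatch_pages is a hypothetical helper and the broker URL is an assumption:

    from celery import Celery, group

    app = Celery('pdf', broker='amqp://guest@localhost//')

    @app.task
    def parse(path_to_pdf):
        ...  # returns a list of pages

    @app.task
    def feed(page):
        ...  # processes a single page

    @app.task
    def dispatch_pages(pages):
        # fan each page out as an independent feed task
        group(feed.s(page) for page in pages).apply_async()

    def process(path_to_pdf):
        # parse produces the pages, dispatch_pages fans them out;
        # no task ever blocks waiting on another task's result
        return (parse.s(path_to_pdf) | dispatch_pages.s()).apply_async()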

Increase celery retry time each retry cycle

二次信任 submitted on 2019-11-29 01:04:03
Question: I do retries with Celery as in the docs example:

    @task()
    def add(x, y):
        try:
            ...
        except Exception as exc:
            # override the default and retry in 1 minute
            add.retry(exc=exc, countdown=60)

How can I increase the retry countdown each time a retry occurs for this job, e.g. 60 seconds, 2 minutes, 4 minutes and so on, until MaxRetriesExceeded is raised?

Answer 1: Since version 4.2 you can use the options autoretry_for and retry_backoff for this purpose, for example: @task(max_retries=10, autoretry_for
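The truncated answer presumably continues along these lines; a sketch of the 4.2+ options, tuned to the 60 s, 2 min, 4 min schedule from the question (the broker URL and task body are assumptions):

    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task(
        autoretry_for=(Exception,),  # retry automatically on any exception
        retry_backoff=60,            # first retry after 60 s, then 120 s, 240 s, ...
        retry_backoff_max=3600,      # never wait longer than an hour between tries
        retry_jitter=False,          # keep the schedule deterministic for this example
        max_retries=10,              # after that, MaxRetriesExceededError is raised
    )
    def add(x, y):
        return x + y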

How to restart Celery gracefully without delaying tasks

早过忘川 submitted on 2019-11-28 20:05:14
Question: We use Celery with our Django webapp to manage offline tasks; some of these tasks can run up to 120 seconds. Whenever we make any code modifications, we need to restart Celery to have it reload the new Python code. Our current solution is to send a SIGTERM to the main Celery process (kill -s 15 `cat /var/run/celeryd.pid`), then wait for it to die and restart it (python manage.py celeryd --pidfile=/var/run/celeryd.pid [...]). Because of the long-running tasks, this usually means the
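For illustration, a rough Python version of that restart script; SIGTERM triggers Celery's warm shutdown, which stops accepting new tasks and waits for running ones to finish. The pidfile path is taken from the question; the restart command line is an assumption:

    import os
    import signal
    import subprocess
    import time

    PIDFILE = '/var/run/celeryd.pid'

    def restart_celeryd():
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)   # warm shutdown: finish current tasks first
        while True:
            try:
                os.kill(pid, 0)        # signal 0 only checks the process still exists
            except OSError:
                break                  # old worker is gone
            time.sleep(1)
        subprocess.Popen(['python', 'manage.py', 'celeryd',
                          '--pidfile=' + PIDFILE])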

Setting Time Limit on specific task with celery

我的梦境 submitted on 2019-11-28 18:33:50
I have a task in Celery that could potentially run for 10,000 seconds while operating normally. However, all the rest of my tasks should be done in less than one second. How can I set a time limit for the intentionally long-running task without changing the time limit on the short-running tasks?

mher: You can set task time limits (hard and/or soft) either while defining a task or while calling:

    from celery.exceptions import SoftTimeLimitExceeded

    # a soft limit must also be set, or the exception below never fires
    @celery.task(time_limit=20, soft_time_limit=10)
    def mytask():
        try:
            return do_work()
        except SoftTimeLimitExceeded:
            cleanup_in_a_hurry()

or

    mytask.apply_async(args=[],
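The call-site form the excerpt cuts off presumably passes the same limits as execution options; a sketch with assumed values of a 30-second hard limit and a 10-second soft limit:

    # per-call limits override whatever the task was defined with,
    # so only this invocation gets the longer budget
    mytask.apply_async(args=[], kwargs={}, time_limit=30, soft_time_limit=10)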

How do I restart celery workers gracefully?

戏子无情 submitted on 2019-11-28 16:12:21
While issuing a new build to update code in workers, how do I restart celery workers gracefully?

Edit: What I intend to do is something like this:

    - Worker is running, probably uploading a 100 MB file to S3
    - A new build comes
    - Worker code has changes
    - Build script fires signal to the Worker(s)
    - Starts new workers with the new code
    - Worker(s) that got the signal exit after finishing the existing job

The new recommended method of restarting a worker is documented here: http://docs.celeryproject.org/en/latest/userguide/workers.html#restarting-the-worker

    $ celery multi start 1 -A proj -l info -c4 -
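If the aim is to reload task code without dropping in-flight jobs, another option is the pool_restart remote-control command, which restarts only the pool processes; it requires the worker's pool-restarts setting to be enabled, and the app name and module list below are assumptions:

    from celery import Celery

    app = Celery('proj', broker='amqp://guest@localhost//')

    # ask every worker to restart its pool processes, re-importing
    # the listed modules so the new code is picked up
    app.control.broadcast('pool_restart',
                          arguments={'modules': ['proj.tasks'],
                                     'reload': True})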

How to start a Celery worker from a script/module __main__?

二次信任 submitted on 2019-11-28 05:27:17
I've defined a Celery app in a module, and now I want to start the worker from the same module in its __main__, i.e. by running the module with python -m instead of celery from the command line. I tried this:

    app = Celery('project', include=['project.tasks'])

    # do all kinds of project-specific configuration
    # that should occur whenever this module is imported

    if __name__ == '__main__':
        # log stuff about the configuration
        app.start(['worker', '-A', 'project.tasks'])

but now Celery thinks I'm running the worker without arguments:

    Usage: worker <command> [options]

    Show help screen and exit.
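The API intended for this is worker_main(), which parses its argument list as the worker command line; a minimal sketch assuming the same module layout and a reasonably recent Celery:

    from celery import Celery

    app = Celery('project', include=['project.tasks'])

    if __name__ == '__main__':
        # worker_main runs the worker against this app directly,
        # so no -A option is needed
        app.worker_main(['worker', '--loglevel=INFO'])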

How can I recover unacknowledged AMQP messages from other channels than my connection's own?

白昼怎懂夜的黑 submitted on 2019-11-27 17:58:32
It seems the longer I keep my rabbitmq server running, the more trouble I have with unacknowledged messages. I would love to requeue them. In fact, there seems to be an AMQP command to do this, but it only applies to the channel that your connection is using. I built a little pika script to at least try it out, but I am either missing something or it cannot be done this way (how about with rabbitmqctl?):

    import pika

    credentials = pika.PlainCredentials('***', '***')
    parameters = pika.ConnectionParameters(host='localhost', port=5672,
                                           credentials=credentials,
                                           virtual_host='***')

    def handle_delivery
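The AMQP command in question is basic.recover, which pika exposes on the channel; a sketch reusing the question's placeholder credentials. The limitation the question describes is real: recover only requeues messages unacked on your own channel, while messages held by other consumers are requeued when their connection closes:

    import pika

    credentials = pika.PlainCredentials('***', '***')
    parameters = pika.ConnectionParameters(host='localhost', port=5672,
                                           credentials=credentials,
                                           virtual_host='***')

    connection = pika.BlockingConnection(parameters)
    channel = connection.channel()

    # requeue every unacknowledged message delivered on THIS channel
    channel.basic_recover(requeue=True)

    connection.close()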

Daemonizing celery

你。 submitted on 2019-11-27 10:11:11
Question: Following the instructions found here, I copied the script from GitHub into /etc/init.d/celeryd, then made it executable:

    $ ll /etc/init.d/celeryd
    -rwxr-xr-x 1 root root 9481 Feb 19 11:27 /etc/init.d/celeryd*

I created the config file /etc/default/celeryd as per the instructions:

    # Names of nodes to start
    # most will only start one node:
    #CELERYD_NODES="worker1"
    # but you can also start multiple and configure settings
    # for each in CELERYD_OPTS (see `celery multi --help` for examples).
    CELERYD_NODES=

How do I restart celery workers gracefully?

Deadly submitted on 2019-11-27 09:36:34
Question: While issuing a new build to update code in workers, how do I restart celery workers gracefully?

Edit: What I intend to do is something like this:

    - Worker is running, probably uploading a 100 MB file to S3
    - A new build comes
    - Worker code has changes
    - Build script fires signal to the Worker(s)
    - Starts new workers with the new code
    - Worker(s) that got the signal exit after finishing the existing job

Answer 1: The new recommended method of restarting a worker is documented here: http://docs

Setting Time Limit on specific task with celery

对着背影说爱祢 submitted on 2019-11-27 01:00:40
Question: I have a task in Celery that could potentially run for 10,000 seconds while operating normally. However, all the rest of my tasks should be done in less than one second. How can I set a time limit for the intentionally long-running task without changing the time limit on the short-running tasks?

Answer 1: You can set task time limits (hard and/or soft) either while defining a task or while calling:

    from celery.exceptions import SoftTimeLimitExceeded

    @celery.task(time_limit=20)
    def mytask():
        try: