celery

How to start a task only when all other tasks have finished in Celery

Submitted by 北城以北 on 2019-12-04 05:51:59
Question: In Celery, I want to start a task only when all the other tasks have completed. I found some resources, such as "Celery Starting a Task when Other Tasks have Completed" and "Running a task after all tasks have been completed", but I am quite new to Celery and could not really understand them (or many other resources, for that matter). So I have defined a task as follows in tasks.py:

@celapp.task()
def sampleFun(arg1, arg2, arg3):
    # do something here

and I call it like this: for x in xrange
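A minimal sketch of the pattern usually recommended for this: a chord runs a callback task once every task in its header group has finished. The app and task names (celapp, sampleFun) follow the question; the broker/backend URLs and the allDone callback are assumptions for illustration, not the asker's code.

from celery import Celery, chord

celapp = Celery('tasks', broker='amqp://guest@localhost//', backend='rpc://')

@celapp.task()
def sampleFun(arg1, arg2, arg3):
    # stand-in body; the real work goes here
    return (arg1, arg2, arg3)

@celapp.task()
def allDone(results):
    # runs exactly once, after every sampleFun in the header group has finished
    print("all %d tasks finished" % len(results))

# fan out the calls that the question's loop would make, then run allDone at the end
chord(sampleFun.s(x, x + 1, x + 2) for x in range(10))(allDone.s())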

Chain a celery task's results into a distributed group

Submitted by 做~自己de王妃 on 2019-12-04 05:21:34
Like in this other question, I want to create a Celery group from a list that's returned by a Celery task. The idea is that the first task will return a list, and the second task will explode that list into concurrent tasks, one for every item in the list. The plan is to use this while downloading content: the first task gets links from a website, and the second task is a chain that downloads the page, processes it, and then uploads it to S3. Finally, once all the subpages are done, the website is marked as done in our DB. Something like:

chain(
    get_links_from_website.si('https://www.google.com'),
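One commonly suggested way to do this, sketched under the assumption of Celery 4+: let an intermediate task receive the list and replace itself with a group, so the next task in the outer chain only runs after every item has finished. The task bodies below are hypothetical stubs; only get_links_from_website and the final "mark done" step follow the question.

from celery import chain, group, shared_task

@shared_task
def get_links_from_website(url):
    # hypothetical stub: scrape `url` and return its sub-page links
    return ["%s/page/%d" % (url, i) for i in range(10)]

@shared_task
def download_process_upload(link):
    # hypothetical stub for the per-page work (download -> process -> upload to S3)
    return link

@shared_task(bind=True)
def fan_out(self, links):
    # replace this task with a group; the following task in the outer chain
    # then behaves like a chord callback and runs once all items are done
    return self.replace(group(download_process_upload.s(link) for link in links))

@shared_task
def mark_website_done(results, url):
    print("finished %d pages for %s" % (len(results), url))

chain(
    get_links_from_website.si('https://www.google.com'),
    fan_out.s(),
    mark_website_done.s('https://www.google.com'),
).delay()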

Daemonize Celerybeat in Elastic Beanstalk(AWS)

Submitted by 空扰寡人 on 2019-12-04 05:19:40
I am trying to run celerybeat as a daemon in Elastic Beanstalk. Here is my config file:

files:
  "/opt/python/log/django.log":
    mode: "000666"
    owner: ec2-user
    group: ec2-user
    content: |
      # Log file
    encoding: plain
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create
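Scripts of this kind typically go on to append a supervisord program block for beat and then reload supervisor. A sketch of what that block often looks like; the paths, app name, and options are assumptions, since the script above is truncated:

[program:celerybeat]
; run beat under the app's virtualenv (path and project name are hypothetical)
command=/opt/python/run/venv/bin/celery beat -A myproject --loglevel=INFO --pidfile=/tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10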

Incorrect user for supervisor'd celeryd

Submitted by 断了今生、忘了曾经 on 2019-12-04 05:07:16
I have some periodic tasks that I run with Celery (daemonized by supervisord), but after trying to create a directory in the home dir of the user I set up for the supervised process I got a "permission denied" error. After looking at the os.environ dict in a running Celery task I noticed that the USER variable is set to 'root' and not the user that I set up in my supervisord config for Celery. This is what my /usr/local/etc/supervisord.conf looks like:

[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777

[supervisord]
logfile=/var/log/supervisord.log
pidfile=/var/run/supervisord.pid
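For context, supervisord's user= option only switches the uid of the child process; it does not rewrite environment variables such as USER or HOME, which still come from the environment supervisord itself was started with (root, here). A commonly suggested fix is to set them explicitly in the program section; a sketch with a hypothetical user name and paths:

[program:celery]
command=/path/to/venv/bin/celery worker -A myproject --loglevel=INFO
user=celeryuser
; user= changes the uid only, so pass USER/HOME to the process explicitly
environment=USER="celeryuser",HOME="/home/celeryuser"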

Multiple Docker containers and Celery

Submitted by 爷，独闯天下 on 2019-12-04 04:53:11
We have the following structure of the project right now: a web server that processes incoming requests from the clients, and an analytics module that provides recommendations to the users. We decided to keep these modules completely independent and move them to different Docker containers. When a query from a user arrives at the web server, it sends another query to the analytics module to get the recommendations. For the recommendations to be consistent we need to do some background calculations periodically and when, for instance, new users register within our system. Also some background tasks are
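A minimal sketch of how such a split is often wired together with docker-compose; the service names, build paths, and the Redis broker are assumptions for illustration, not part of the question:

version: "3"
services:
  web:
    build: ./web              # the request-handling web server
    ports:
      - "8000:8000"
    depends_on:
      - redis
  analytics-worker:
    build: ./analytics        # Celery worker for the analytics module
    command: celery -A analytics worker --loglevel=INFO
    depends_on:
      - redis
  analytics-beat:
    build: ./analytics        # periodic background recalculation
    command: celery -A analytics beat --loglevel=INFO
    depends_on:
      - redis
  redis:
    image: redis:5            # broker shared by web and analytics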

Retrieve result from 'task_id' in Celery from unknown task

Submitted by 陌路散爱 on 2019-12-04 04:33:13
How do I pull the result of a task if I do not know beforehand which task was performed? Here's the setup. Given the following source ('tasks.py'):

from celery import Celery

app = Celery('tasks',
             backend="db+mysql://u:p@localhost/db",
             broker='amqp://guest:guest@localhost:5672//')

@app.task
def add(x, y):
    return x + y

@app.task
def mul(x, y):
    return x * y

with RabbitMQ 3.3.2 running locally:

marcs-mbp:sbin marcstreeter$ ./rabbitmq-server
RabbitMQ 3.3.2. Copyright (C) 2007-2014 GoPivotal, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/
Logs: /usr/local/var/log
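A minimal sketch of the usual answer, assuming the tasks.py above: a task_id alone is enough to rebuild an AsyncResult against the configured result backend, so the result can be fetched without knowing which task produced it.

from celery.result import AsyncResult
from tasks import app, add

# somewhere, a task was queued and only its id was stored
task_id = add.delay(2, 3).id

# later, possibly in another process, rebuild the handle from the id alone
res = AsyncResult(task_id, app=app)
print(res.state)            # e.g. 'PENDING' or 'SUCCESS'
print(res.get(timeout=10))  # 5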

Celery execution process

Submitted by 老子叫甜甜 on 2019-12-04 04:26:57
1. Register the task with the app.
2. Call delay to put the task on the broker and get back the task's id.
3. Start the Celery worker, which executes the tasks on the broker and then puts each task's result into the backend.
4. Fetch the result.

Source: https://www.cnblogs.com/fan-1994716/p/11831152.html
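A minimal sketch of those four steps; the module name, task, and Redis URLs are assumptions for illustration:

from celery import Celery

# 1. register the task with the app
app = Celery('tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')

@app.task
def add(x, y):
    return x + y

# 2. delay() puts the task on the broker and returns an id
result = add.delay(2, 3)
print(result.id)

# 3. a worker started with `celery -A tasks worker` executes the task
#    and writes the return value to the backend
# 4. fetch the result from the backend
print(result.get(timeout=10))  # 5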

Celery dynamic queue creation and routing

Submitted by 最后都变了- on 2019-12-04 03:36:52
I'm trying to call a task and create a queue for that task if it doesn't exist, then immediately insert the called task into that queue. I have the following code:

@task
def greet(name):
    return "Hello %s!" % name

def run():
    result = greet.delay(args=['marc'], queue='greet.1', routing_key='greet.1')
    print result.ready()

Then I have a custom router:

class MyRouter(object):

    def route_for_task(self, task, args=None, kwargs=None):
        if task == 'tasks.greet':
            return {'queue': kwargs['queue'],
                    'exchange': 'greet',
                    'exchange_type': 'direct',
                    'routing_key': kwargs['routing_key']}
        return None

this creates an
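For comparison, a sketch of how per-call routing is usually expressed without a custom router, assuming a default setup where task_create_missing_queues is left enabled so an unknown queue is declared on first use (the task name greet follows the question):

# delay() only forwards its arguments to the task itself;
# routing options such as queue/routing_key go through apply_async
result = greet.apply_async(args=['marc'],
                           queue='greet.1',
                           routing_key='greet.1')
print(result.ready())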

Replacing Celerybeat with Chronos

Submitted by 风格不统一 on 2019-12-04 03:29:26
How mature is Chronos? Is it a viable alternative to a scheduler like celery-beat? Right now our scheduling implements a periodic "heartbeat" task that checks for "outstanding" events and fires them if they are overdue. We are using python-dateutil's rrule for defining this. We are looking at alternatives to this approach, and Chronos seems a very attractive alternative: 1) it would remove the need for a heartbeat schedule task, 2) it supports RESTful submission of events in ISO 8601 format, 3) it has a useful management interface, and 4) it scales. The crucial requirement is that
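As a point of reference for the current approach, a minimal sketch of a heartbeat check that uses python-dateutil's rrule to decide whether an event is overdue; the in-memory event store and field names are hypothetical stand-ins, not taken from the question:

from datetime import datetime
from dateutil.rrule import rrulestr

# hypothetical stand-in for the "outstanding events" table
events = [
    {"id": 1, "rrule": "FREQ=HOURLY", "start": datetime(2019, 1, 1),
     "last_fired": datetime(2019, 1, 1)},
]

def heartbeat():
    # the real version would be a periodic Celery task reading from the DB
    now = datetime.utcnow()
    for event in events:
        rule = rrulestr(event["rrule"], dtstart=event["start"])
        due = rule.before(now, inc=True)
        # fire if a scheduled occurrence has passed since the last run
        if due is not None and due > event["last_fired"]:
            print("firing event", event["id"])  # e.g. fire_event.delay(event["id"])
            event["last_fired"] = now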

Get worker ID in Celery

Submitted by 血红的双手。 on 2019-12-04 03:28:49
I want to use Celery to run jobs on a GPU server with four Tesla cards. I run the Celery worker with a pool of four workers such that each card always runs one job. My problem is how to instruct the workers to each claim one GPU. Currently I rely on the assumption that the worker processes all have contiguous process IDs:

device_id = os.getpid() % self.ndevices

However, this is not guaranteed to always work, e.g. when worker processes get restarted over time. So ideally, I would like to get the ID of each worker directly. Can someone tell me if it is possible to inspect the worker
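A sketch of one commonly suggested approach, under the assumption that the prefork pool is used: billiard (the multiprocessing fork that Celery uses) gives each pool process an index attribute in the range 0..concurrency-1, which is reused when processes are recycled. Treat that attribute as an assumption to verify against your Celery/billiard versions, not something stated in the question.

from billiard import current_process
from celery.signals import worker_process_init

device_id = None

@worker_process_init.connect
def assign_gpu(**kwargs):
    # with --concurrency=4 each child gets an index in 0..3,
    # which maps the four pool processes onto the four cards
    global device_id
    device_id = current_process().index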