celery

How to combine Celery with asyncio?

烈酒焚心 submitted on 2019-11-29 01:12:25

How can I create a wrapper that makes Celery tasks look like asyncio.Task? Or is there a better way to integrate Celery with asyncio?

@asksol, the creator of Celery, said this: "It's quite common to use Celery as a distributed layer on top of async I/O frameworks (top tip: routing CPU-bound tasks to a prefork worker means they will not block your event loop)." But I could not find any code examples specifically for the asyncio framework.

John Moutafis: That will be possible from Celery version 5.0, as stated on the official site: http://docs.celeryproject.org/en/4.0/whatsnew-4.0.html#preface …
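Until native support lands, a common workaround is to poll the AsyncResult from a coroutine so the event loop never blocks. A minimal sketch (the wrapper name and polling interval are illustrative, not from any answer):

```python
import asyncio
from celery.result import AsyncResult

async def as_coroutine(result: AsyncResult, poll_interval: float = 0.5):
    """Await a Celery result without blocking the asyncio event loop."""
    while not result.ready():
        await asyncio.sleep(poll_interval)  # yield to the loop between polls
    return result.get()

# usage inside a coroutine, assuming `add` is a Celery task:
#   value = await as_coroutine(add.delay(2, 2))
```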

Retrieve task result by id in Celery

喜欢而已 submitted on 2019-11-29 01:07:26

I am trying to retrieve the result of a task that has completed. This works:

```python
>>> from proj.tasks import add
>>> res = add.delay(3, 4)
>>> res.get()
7
>>> res.status
'SUCCESS'
>>> res.id
'0d4b36e3-a503-45e4-9125-cfec0a7dca30'
```

But I want to run this from another application. So I restart the Python shell and try:

```python
>>> from proj.tasks import add
>>> res = add.AsyncResult('0d4b36e3-a503-45e4-9125-cfec0a7dca30')
>>> res.status
'PENDING'
>>> res.get()  # error
```

How can I retrieve the result?

Demetris: It works using AsyncResult (see this answer). So first create the task:

```python
>>> from cel.tasks import add
>>> res = add.delay(3, 4)
>>> res.status
'SUCCESS'
>>> res
```

…
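A likely explanation for the PENDING status (an inference: Celery reports unknown task ids as PENDING) is that the second process is not talking to the same result backend. A minimal sketch of fetching by id from a separate process, assuming proj.tasks also exposes the configured app instance:

```python
from celery.result import AsyncResult
from proj.tasks import app  # assumption: the same Celery app, same result backend

res = AsyncResult('0d4b36e3-a503-45e4-9125-cfec0a7dca30', app=app)
if res.ready():
    print(res.get())
```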

Increase celery retry time each retry cycle

二次信任 submitted on 2019-11-29 01:04:03

Question: I do retries with Celery as in the docs example:

```python
@task()
def add(x, y):
    try:
        ...
    except Exception as exc:
        # override the default and retry in 1 minute
        add.retry(exc=exc, countdown=60)
```

How can I increase the retry countdown each time a retry occurs for this job, e.g. 60 seconds, 2 minutes, 4 minutes and so on, until MaxRetriesExceeded is raised?

Answer 1: Since version 4.2 you can use the autoretry_for and retry_backoff options for this purpose, for example: @task(max_retries=10, autoretry_for …
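The answer is cut off above; a complete decorator in that style might look like the following sketch (option values are illustrative):

```python
from celery import shared_task

@shared_task(autoretry_for=(Exception,),  # retry on any exception
             max_retries=10,
             retry_backoff=60,            # first retry after ~60 s, then doubling
             retry_backoff_max=600,       # cap the delay at 10 minutes
             retry_jitter=False)          # keep delays deterministic: 60 s, 120 s, 240 s, ...
def add(x, y):
    ...
```

Before 4.2, the same effect is commonly achieved by hand, deriving the countdown from the retry count:

```python
@shared_task(bind=True, max_retries=10)
def add(self, x, y):
    try:
        ...
    except Exception as exc:
        # 60 s, 120 s, 240 s, ... doubling on every attempt
        raise self.retry(exc=exc, countdown=60 * 2 ** self.request.retries)
```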

Retrieve a task result object, given a `task_id` in Celery

三世轮回 submitted on 2019-11-29 00:58:42

I store the task_id from a celery.result.AsyncResult in a database and relate it to the item that the task affects. This allows me to run a query to retrieve all the task_ids of tasks that relate to a specific item. So after retrieving the task_id from the database, how do I go about retrieving information about the task's state/result/etc.?

From the Celery FAQ:

```python
result = MyTask.AsyncResult(task_id)
result.get()
```

Source: https://stackoverflow.com/questions/5544611/retrieve-a-task-result-object-given-a-task-id-in-celery
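Note that result.get() blocks until the task finishes. A non-blocking variant of the same lookup (a sketch, reusing the names from the FAQ snippet):

```python
result = MyTask.AsyncResult(task_id)
print(result.state)                      # e.g. 'PENDING', 'STARTED', 'SUCCESS'
if result.ready():                       # finished, successfully or not
    value = result.get(propagate=False)  # don't re-raise a failed task's exception
```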

celery

旧街凉风 submitted on 2019-11-29 00:37:37

Installing and configuring Celery: pip install celery. Message broker: RabbitMQ/Redis. app = Celery('task_name', broker='xxx', backend='xxx')

Running asynchronous tasks with Celery, basic usage. Create the file celery_app_task.py:

```python
import celery
import time

broker = 'redis://127.0.0.1:6379/0'
backend = 'redis://127.0.0.1:6379/1'
app = celery.Celery('test', backend=backend, broker=broker)

@app.task
def add(x, y):
    time.sleep(1)
    return x + y
```

Create the file add_task.py to submit a task:

```python
from celery_app_task import add

result = add.delay(4, 5)
print(result.id)
```

Create the file run.py to execute the task, or run it from the command line: celery worker -A celery_app_task -l info (note: on Windows, celery worker -A celery_app_task -l info -P eventlet). from celery_app …
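The excerpt cuts off inside run.py; a version that starts the worker programmatically (one common pattern, a guess rather than the original file) could look like:

```python
# run.py (hypothetical reconstruction): start a worker in-process,
# equivalent to `celery worker -A celery_app_task -l info`
from celery_app_task import app

if __name__ == '__main__':
    app.worker_main(argv=['worker', '-l', 'info'])
```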

Unable to start Airflow worker/flower and need clarification on Airflow architecture to confirm that the installation is correct

一曲冷凌霜 submitted on 2019-11-28 23:38:57

Running a worker on a different machine results in the errors specified below. I have followed the configuration instructions and have synced the dags folder. I would also like to confirm that RabbitMQ and PostgreSQL only need to be installed on the Airflow core machine and do not need to be installed on the workers (the workers only connect to the core). The setup is specified below:

Airflow core/server computer. Has the following installed: Python 2.7 with airflow (AIRFLOW_HOME = ~/airflow), celery, psycopg2, RabbitMQ, PostgreSQL. Configurations made in airflow.cfg: sql_alchemy …

Celery: A Python Parallel and Distributed Framework

瘦欲@ submitted on 2019-11-28 23:16:49

Celery (named after the vegetable) is a distributed task queue built on Python. It uses task queues to schedule and execute work across distributed machines, processes, and threads.

Architecture: Celery's architecture consists of three parts: the message broker, the task execution units (workers), and the task result store.

Message broker: Celery does not provide a message service itself, but it integrates easily with third-party message brokers, including RabbitMQ, Redis, MongoDB (experimental), Amazon SQS (experimental), CouchDB (experimental), SQLAlchemy (experimental), Django ORM (experimental), and IronMQ.

Task execution unit: the worker is Celery's unit of task execution; workers run concurrently on the nodes of a distributed system.

Task result store: the task result store holds the results of tasks executed by workers. Celery can store results in different backends, including AMQP, Redis, memcached, MongoDB, SQLAlchemy, Django ORM, Apache Cassandra, and IronCache.

In addition, Celery supports different concurrency and serialization mechanisms. Concurrency: Prefork, …
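A short sketch tying the three components to an app's configuration (URLs and values are illustrative):

```python
from celery import Celery

app = Celery(
    'demo',
    broker='redis://localhost:6379/0',   # message broker: where task messages are queued
    backend='redis://localhost:6379/1',  # task result store: where return values land
)
# concurrency and serialization are configurable as well:
app.conf.worker_concurrency = 4          # size of the prefork worker pool
app.conf.task_serializer = 'json'
app.conf.result_serializer = 'json'
```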

Django and Celery - re-loading code into Celery after a change

为君一笑 submitted on 2019-11-28 23:14:30

If I make a change to tasks.py while Celery is running, is there a mechanism by which it can reload the updated code, or do I have to shut Celery down and restart it? I read that Celery had an --autoreload argument in older versions, but I can't find it in the current version: celery: error: unrecognized arguments: --autoreload

ChillarAnand: Unfortunately --autoreload doesn't work and it is deprecated. You can use Watchdog, which provides watchmedo, a shell utility to perform actions based on file events. pip install watchdog. You can start the worker with watchmedo auto-restart -- celery worker -l info -A …
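For reference, a rough Python equivalent of what watchmedo auto-restart does, sketched with watchdog's API (the app module proj is a placeholder):

```python
import subprocess
import time
from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

def start_worker():
    # 'proj' stands in for your Celery app module
    return subprocess.Popen(['celery', 'worker', '-A', 'proj', '-l', 'info'])

class RestartOnChange(PatternMatchingEventHandler):
    def __init__(self):
        super().__init__(patterns=['*.py'])
        self.worker = start_worker()

    def on_any_event(self, event):
        # a .py file changed: stop the old worker and spawn a fresh one
        self.worker.terminate()
        self.worker.wait()
        self.worker = start_worker()

handler = RestartOnChange()
observer = Observer()
observer.schedule(handler, path='.', recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
handler.worker.terminate()
```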

Celery creating a new connection for each task

人走茶凉 submitted on 2019-11-28 22:32:26

Question: I'm using Celery with Redis to run some background tasks, but each time a task is called, it creates a new connection to Redis. I'm on Heroku and my Redis To Go plan allows for 10 connections. I'm quickly hitting that limit and getting a "max number of clients reached" error. How can I ensure that Celery queues the tasks on a single connection rather than opening a new one each time?

EDIT: including the full traceback: File "/app/.heroku/venv/lib/python2.7/site-packages/django/core/handlers …
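The excerpt contains no answer; one commonly suggested direction is to cap the broker connection pool in the Celery settings. A sketch using the old-style uppercase setting names that match this Django-era setup (values illustrative):

```python
# settings.py sketch: bound the number of Redis connections
BROKER_POOL_LIMIT = 1             # reuse one broker connection from the pool
CELERY_REDIS_MAX_CONNECTIONS = 2  # ceiling for the Redis result-backend pool
```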

Celery: WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL)

那年仲夏 submitted on 2019-11-28 22:31:45

I use Celery with RabbitMQ in my Django app (on Elastic Beanstalk) to manage background tasks, and I daemonized it using Supervisor. The problem now is that one of the periodic tasks I defined is failing (after a week in which it worked properly). The error I get is:

```
[01/Apr/2014 23:04:03] [ERROR] [celery.worker.job:272] Task clean-dead-sessions[1bfb5a0a-7914-4623-8b5b-35fc68443d2e] raised unexpected: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)
Traceback (most recent call last):
  File "/opt/python/run/venv/lib/python2.7/site-packages/billiard/pool.py", line 1168, in …
```
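SIGKILL usually comes from outside the process; on memory-constrained hosts the kernel OOM killer is a frequent culprit (an inference, since the excerpt includes no answer; checking dmesg for OOM messages would confirm it). Two guard rails often suggested in this situation, as a sketch:

```python
# sketch: settings that often mitigate SIGKILLed worker children
app.conf.worker_max_tasks_per_child = 100  # recycle children to cap memory growth
app.conf.task_time_limit = 15 * 60         # hard per-task time limit, in seconds
```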