celery

Can't start Celery worker on Windows 10 with “PicklingError”

Anonymous (unverified), submitted 2019-12-03 09:05:37

Question: I have a simple test program that runs successfully on Linux, but it won't run on my Windows 10 x64 machine. When I try to start a Celery worker, it fails with an unrecoverable PicklingError (Celery version: 3.1.20). In my Celery config I've set the serialization to 'json', but that didn't help at all:

    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_TASK_SERIALIZER = 'json'
    CELERY_ACCEPT_CONTENT = ['json']

Here is the full error message:

    [2016-02-09 15:11:48,532: ERROR/MainProcess] Unrecoverable error: PicklingError("Can't pickle
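The serializer settings above only govern how task messages are encoded; they do not stop the worker on Windows from pickling internal state when it spawns its pool processes, which is one commonly cited cause of this error. A hedged sketch of the settings plus a worker invocation that avoids the spawning pool (the module name tasks is an assumption, not taken from the post):

    # celeryconfig.py -- sketch; keeps the poster's json-only settings
    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_TASK_SERIALIZER = 'json'
    CELERY_ACCEPT_CONTENT = ['json']

    # On Windows, an in-process pool (--pool=solo) avoids the pickling the
    # default prefork pool performs when spawning children, and is often
    # suggested as a workaround:
    #   celery -A tasks worker --loglevel=info --pool=solo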

Celery works, but Flower doesn't

Anonymous (unverified), submitted 2019-12-03 09:05:37

Question: I have installed Celery, RabbitMQ and Flower. I am able to browse to the Flower port. I have the following simple worker that I can attach to Celery and call from a Python program:

    # -*- coding: utf-8 -*-
    """
    Created on Sat Dec 12 16:37:33 2015

    @author: idf
    """
    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task
    def add(x, y):
        return x + y

This program calls it:

    # -*- coding: utf-8 -*-
    """
    Created on Sat Dec 12 16:40:16 2015

    @author: idf
    """
    from tasks import add

    add.delay(36, 5)

I start celery like
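Flower shows tasks by listening for worker events on the same broker, so the usual checklist is to point Flower at the broker the worker uses and make sure events are being sent. A sketch under those assumptions; the rpc backend, the event setting and the -E flag are additions for illustration, not from the post:

    # tasks.py -- sketch
    from celery import Celery

    app = Celery('tasks',
                 broker='amqp://guest@localhost//',
                 backend='rpc://')                 # lets callers read results back

    app.conf.CELERY_SEND_TASK_SENT_EVENT = True    # emit task-sent events for monitors

    @app.task
    def add(x, y):
        return x + y

    # Typical invocations, run from the directory containing tasks.py:
    #   celery -A tasks worker --loglevel=info -E
    #   celery -A tasks flower --broker=amqp://guest@localhost//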

In Celery, how do I run a task, and then have that task run another task, and keep it going?

五迷三道, submitted 2019-12-03 09:02:15

    # tasks.py
    import random

    from celery.task import Task

    class Randomer(Task):
        def run(self, **kwargs):
            # run Randomer again!!!
            return random.randrange(0, 1000000)

    >>> from tasks import Randomer
    >>> r = Randomer()
    >>> r.delay()

Right now I run the simple task and it returns a random number. But how do I make it run another task inside that task?

Paulo Scardine: You can call other_task.delay() from inside Randomer.run; in this case you may want to set Randomer.ignore_result = True (and other_task.ignore_result, and so on). Remember that delay on a Celery task returns instantly, so if you don't put any limit or
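A sketch of the same idea with the decorator-based API, where one task enqueues another and optionally re-schedules itself with a delay. The task names (do_random, keep_going) and the 60-second countdown are illustrative, not from the question; without a countdown or some stop condition the chain re-queues itself as fast as the workers can process it, which is the caveat the answer hints at:

    import random

    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task(ignore_result=True)
    def do_random():
        return random.randrange(0, 1000000)

    @app.task(ignore_result=True)
    def keep_going():
        do_random.delay()                      # fire-and-forget another task
        keep_going.apply_async(countdown=60)   # re-schedule this task in 60 seconds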

handle `post_save` signal in celery

北慕城南, submitted 2019-12-03 09:01:05

Question: I have a rather long-running task that needs to be executed after inserting or updating a specific model. I decided to use the post_save signal instead of overriding the save method, to reduce coupling. Since Django signals are not asynchronous, I had to run the long job as a Celery task (which we already have in our stack). A simplified version of my signal handling function is as follows:

    @receiver(post_save, sender=MyModel)
    def my_model_post_save(sender, instance, **kwargs):
        handle_save_task
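One common shape for this handler, sketched on the assumptions that handle_save_task is a Celery task taking a primary key (passing the pk rather than the model instance keeps the message serializable) and that the Django version in use provides transaction.on_commit, so the worker only ever sees committed rows. The import paths are hypothetical:

    from django.db import transaction
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    from myapp.models import MyModel           # hypothetical import paths
    from myapp.tasks import handle_save_task

    @receiver(post_save, sender=MyModel)
    def my_model_post_save(sender, instance, **kwargs):
        # Queue the Celery task only after the surrounding transaction commits.
        transaction.on_commit(lambda: handle_save_task.delay(instance.pk))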

Celery: Worker with concurrency and reserved tasks only running 1 task

两盒软妹~`, submitted 2019-12-03 09:00:38

Question: Some of the tasks in my code were taking longer and longer to execute. Upon inspection I noticed that although I have my worker node set to concurrency 6, and 6 processes exist to 'do work', only 1 task is shown under 'running tasks'. (The original post includes screenshots of the worker options and of that worker's task tab, showing only 1 running task.) I have found that if I restart Celery the concurrency is once again respected and I will see more than 1 running task, but after some
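With long-running tasks, behaviour like this is often traced to prefetching: one pool process reserves a batch of messages while the others sit idle. A hedged configuration sketch in the Celery 3.x setting style used elsewhere on this page; whether it applies here depends on details the excerpt cuts off:

    # celeryconfig.py -- sketch only
    CELERYD_PREFETCH_MULTIPLIER = 1   # each process reserves at most one extra message
    CELERY_ACKS_LATE = True           # acknowledge only after the task finishes

    # Starting the worker with the "fair" scheduling optimisation is the other
    # commonly suggested knob:
    #   celery worker -A proj --concurrency=6 -Ofair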

Celery 'module' object has no attribute 'app' when using Python 3

Anonymous (unverified), submitted 2019-12-03 08:57:35

Question: I am going through the Celery tutorial. It uses Python 2 and I am trying to implement the same thing with Python 3. I have two files, celery_proj.py:

    from celery import Celery

    app = Celery(
        'proj',
        broker='amqp://',
        backend='amqp://',
        include=['proj.tasks'])

    app.conf.update(Celery_TAST_RESULT_EXPIRES=3600,)

    if __name__ == '__main__':
        app.start()

and tasks.py:

    from celery_proj import app

    @app.task
    def add(x, y):
        return x + y

    @app.task
    def mul(x, y):
        return x * y

    @app.task
    def xsum(numbers):
        return sum(numbers)

When I try to run celery -A proj
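The excerpt ends at the failing command, but the error in the title is consistent with a name mismatch: the app lives in celery_proj.py, so -A proj has no app attribute to find. A hedged sketch with the names made consistent and the config key spelled out in full (the posted Celery_TAST_RESULT_EXPIRES does not match any Celery setting name):

    # celery_proj.py -- sketch
    from celery import Celery

    app = Celery(
        'proj',
        broker='amqp://',
        backend='amqp://',
        include=['tasks'])        # 'tasks', not 'proj.tasks', given this flat layout

    app.conf.update(CELERY_TASK_RESULT_EXPIRES=3600)

    if __name__ == '__main__':
        app.start()

    # tasks.py can stay as posted; the worker is then started with the module
    # that actually defines `app`:
    #   celery -A celery_proj worker --loglevel=info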

Celery - Querying Sqlite DB during task

Anonymous (unverified), submitted 2019-12-03 08:57:35

Question: I have a Python-based Flask app where I am using the Celery task queue to handle a set of e-mail tasks. I would like the Celery task to be able to query a SQLite database that is tied into the whole app, to pull in and use certain data, but I keep getting the error below. If I take out the one line in the Celery task that queries my SQLite database, the task executes without throwing this error, so my assumption is that I am making a fundamental mistake in how I tie Celery and my database together.

    [2015-07-18 21:36:25,168: ERROR
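The error itself is cut off above, but failures of this shape frequently come down to the task querying the database outside of a Flask application context. A minimal sketch assuming a Flask-SQLAlchemy setup; flask_app, db, User and send_report are illustrative names, not from the post:

    from celery import Celery

    from myapp import flask_app, db, User   # hypothetical application module

    celery = Celery('emails', broker='amqp://guest@localhost//')

    @celery.task
    def send_report(user_id):
        # Push an application context so db.session is bound to the app's
        # configured (SQLite) database inside the worker process.
        with flask_app.app_context():
            user = User.query.get(user_id)
            # ... build and send the e-mail using data from `user` ...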

Celery dynamic tasks / hiding Celery implementation behind an interface

Anonymous (unverified), submitted 2019-12-03 08:54:24

Question: I am trying to figure out how to implement my asynchronous jobs with Celery without tying them to the Celery implementation. If I have an interface that accepts objects to schedule, such as callables (or an object that wraps a callable):

    ITaskManager(Interface):
        def schedule(task):
            # eventually run task

then I might implement it with the threading module:

    ThreadingTaskManager(object):
        def schedule(task):
            Thread(task).start()  # or similar

But it seems this couldn't be done with Celery, am I right?

Answer 1: Perhaps one,
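The answer is cut off, but one way such an adapter is sometimes sketched is a single generic Celery task that executes whatever callable it is handed, so callers only ever see the schedule() interface. The names below are illustrative, and the approach assumes the callable is an importable module-level function and that the pickle-capable default serializer of Celery 3.x is in use:

    from threading import Thread

    from celery import Celery

    app = Celery('taskmanager', broker='amqp://guest@localhost//')

    @app.task
    def _run_callable(func, args=(), kwargs=None):
        # Generic task: the worker deserializes `func` and simply calls it.
        return func(*args, **(kwargs or {}))

    class CeleryTaskManager(object):
        def schedule(self, task, *args, **kwargs):
            _run_callable.delay(task, args, kwargs)

    class ThreadingTaskManager(object):
        def schedule(self, task, *args, **kwargs):
            Thread(target=task, args=args, kwargs=kwargs).start()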

Celery Result backend. DisabledBackend object has no attribute _get_task_meta_for

Anonymous (unverified), submitted 2019-12-03 08:48:34

Question: I have configured Celery and the backend:

    cleryapp = Celery(
        'tasks_app',
        brocker='amqp://guest@localhost//',
        backend='db+postgresql://guest@localhost:5432'
    )

'results' appears disabled when I start the worker, but I read in another question here that that's not the issue. The database is getting all the data correctly, but

    result = AsyncResult(task_id)

raises AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'

Answer 1: I found a more convenient way to do that:

    result = celery.AsyncResult(task_id)

celery is the Celery
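Completing the thought the answer excerpt starts: a bare AsyncResult falls back to the default app, whose backend is disabled, while asking the configured app for the result (or binding the app explicitly) carries the backend along. A sketch using the poster's variable name cleryapp; note the question's Celery(...) call also spells broker as 'brocker', which is worth correcting on its own:

    from celery.result import AsyncResult

    # Bind the result to the app that actually has the backend configured...
    result = AsyncResult(task_id, app=cleryapp)

    # ...or, equivalently, go through the app object itself:
    result = cleryapp.AsyncResult(task_id)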

Route celery task to specific queue

Anonymous (unverified), submitted 2019-12-03 08:46:08

Question: I have two separate celeryd processes running on my server, managed by supervisor. They are set to listen on separate queues, as such:

    [program:celeryd1]
    command=/path/to/celeryd --pool=solo --queues=queue1
    ...

    [program:celeryd2]
    command=/path/to/celeryd --pool=solo --queues=queue2
    ...

And my celeryconfig looks something like this:

    from celery.schedules import crontab

    BROKER_URL = "amqp://guest:guest@localhost:5672//"
    CELERY_DISABLE_RATE_LIMITS = True
    CELERYD_CONCURRENCY = 1
    CELERY_IGNORE_RESULT = True
    CELERY_DEFAULT_QUEUE = 'default'
    CELERY
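The config is cut off at the end, but with this style of settings module the usual way to pin particular tasks to particular queues is CELERY_ROUTES; a queue can also be chosen per call at dispatch time. Task names below are placeholders, not from the post:

    # celeryconfig.py -- routing sketch
    CELERY_ROUTES = {
        'myapp.tasks.task_one': {'queue': 'queue1'},
        'myapp.tasks.task_two': {'queue': 'queue2'},
    }

    # Or pick the queue when the task is sent:
    #   task_one.apply_async(args=[1, 2], queue='queue1')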