celery

Celery single task persistent data

Submitted by 瘦欲@ on 2019-12-12 15:15:58
Question: Let's say a single task is enough to keep a machine very busy for a few minutes. I want to get the result of the task and then, depending on that result, have the worker perform the same task again. The question I cannot find an answer to is this: can I keep data in memory on the worker machine so it can be used by the next task?

Answer 1: Yes, you can. The documentation (http://docs.celeryproject.org/en/latest/userguide/tasks.html#instantiation) is a bit vague and I'm not sure if this is the best
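The "instantiation" section linked above rests on the fact that a task class is instantiated only once per worker process, so instance attributes persist between runs. A minimal sketch of that idea, assuming a Redis broker and purely illustrative names (StatefulTask, crunch):

```python
from celery import Celery, Task

app = Celery('example', broker='redis://localhost:6379/0')

class StatefulTask(Task):
    _cache = None  # lives for the lifetime of the worker process

    @property
    def cache(self):
        if self._cache is None:
            self._cache = {}  # expensive setup could go here instead
        return self._cache

@app.task(base=StatefulTask, bind=True)
def crunch(self, key, value):
    # state stored on the task instance is still there the next time
    # this task runs in the same worker process
    self.cache[key] = value
    return len(self.cache)
```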

Django Celery - How to start a task with a delay of n - seconds - countdown flag is ignored

Submitted by 魔方 西西 on 2019-12-12 14:11:21
Question: In my Django project I'm running some asynchronous tasks using Celery (docs), django-celery and RabbitMQ as the broker. While it works in general, I have two problems with my setup:

a) the task execution seems to be joined to my request thread, so the user's HTTP request appears to wait until the task has been executed;
b) the task execution seems to ignore the countdown flag.

For testing purposes I have set up a simple TestTask:

    from celery.task import Task
    from celery.registry import tasks
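Both symptoms together are what eager execution would produce: with CELERY_ALWAYS_EAGER enabled the task runs synchronously inside the request process and the countdown has no effect. For comparison, a minimal sketch of how the delay is normally requested; the task body and arguments are illustrative:

```python
from celery.task import Task

class TestTask(Task):
    def run(self, msg):
        print("TestTask says: %s" % msg)

# enqueue the task; a real (non-eager) worker should start it roughly 10 s later
TestTask.apply_async(args=["hello"], countdown=10)
```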

In Celery Task Queue, is running tasks in a group any different than multiple asyncs in a loop?

Submitted by 笑着哭i on 2019-12-12 12:40:28
Question: Let's say I have a very simple task like this:

    @celery.task(ignore_result=True)
    def print_page(page):
        with open('path/to/page', 'w') as f:
            f.write(page)

(Please ignore the potential race condition in the above code... this is a simplified example.)

My question is whether the following two code samples would produce identical results, or if one is better than the other.

Choice A:

    @celery.task(ignore_result=True)
    def print_pages(page_generator):
        for page in page_generator:
            print_page.s(page)
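For context, the comparison the question is heading towards is roughly the following: both variants send one print_page message per page, and the main practical difference is that a group gives you a single handle over all the results, which matters little here since ignore_result=True. A sketch, assuming the print_page task above:

```python
from celery import group

def choice_a(pages):
    # dispatch one task per page, fire-and-forget
    for page in pages:
        print_page.delay(page)

def choice_b(pages):
    # the same messages, wrapped in a single group
    job = group(print_page.s(page) for page in pages)
    job.apply_async()
```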

Celery not running chord callback

Submitted by 烈酒焚心 on 2019-12-12 12:29:38
Question: After looking at a lot of articles about chord callbacks not executing and trying their solutions, I am still unable to get it to work. In fact, the chord_unlock method is also not getting executed for some reason.

celery.py:

    from __future__ import absolute_import
    from celery import Celery

    app = Celery('sophie',
                 broker='redis://localhost:6379/2',
                 backend='redis://localhost:6379/2',
                 include=['sophie.lib.chord_test'])

    app.conf.update(
        CELERY_ACCEPT_CONTENT=["json"],
        CELERY_TASK_SERIALIZER="json"
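For reference, a chord needs a working result backend (configured above) plus a header group and a body callback. A minimal, hypothetical sketch of what sophie/lib/chord_test.py might look like; the task names and the import path of the app are assumptions, not taken from the question:

```python
from celery import chord
from sophie.celery import app  # the app configured above (assumed module path)

@app.task
def add(x, y):
    return x + y

@app.task
def summarize(results):
    # chord body: called once with the list of header results
    return sum(results)

def run_chord():
    # header = a group of add() calls, body = summarize()
    return chord(add.s(i, i) for i in range(10))(summarize.s())
```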

How do I capture Celery tasks during unit testing?

Submitted by 南楼画角 on 2019-12-12 11:45:14
Question: How can I capture, without running them, the Celery tasks created during a unit test? For example, I'd like to write a test which looks something like this:

    def test_add_user_avatar():
        add_user_avatar(…)
        tasks = get_deferred_tasks(…)
        assert_equal(tasks[0], ResizeImageTask(…))

Specifically, I do not want to use ALWAYS_EAGER: some of my tasks are quite slow and have their own set of test cases. I specifically want to assert that the correct tasks are being created by my front-end code.

Answer 1: My
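One common way to get this effect (not necessarily what the truncated answer goes on to propose) is to patch the task's apply_async during the test, so calls are recorded instead of published to the broker. The task path and arguments below are illustrative:

```python
from unittest import mock

def test_add_user_avatar():
    with mock.patch('myapp.tasks.resize_image.apply_async') as mocked:
        add_user_avatar(user_id=42)          # front-end code under test
        assert mocked.call_count == 1        # exactly one task was scheduled...
        _, kwargs = mocked.call_args
        assert kwargs['args'] == (42,)       # ...with the expected arguments
```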

Django 1.6 + RabbitMQ 3.2.3 + Celery 3.1.9 - why does my celery worker die with: WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV)

Submitted by 核能气质少年 on 2019-12-12 11:20:53
Question: This seems to address a very similar issue, but doesn't give me quite enough insight: https://github.com/celery/billiard/issues/101. It sounds like it might be a good idea to try a non-SQLite database...

I have a straightforward Celery setup with my Django app. In my settings.py file I set a task to run as follows:

    CELERYBEAT_SCHEDULE = {
        'sync_database': {
            'task': 'apps.data.tasks.celery_sync_database',
            'schedule': timedelta(minutes=5)
        }
    }

I have followed the instructions here: http://celery

How to remove timestamps from celery pprint output?

Submitted by ⅰ亾dé卋堺 on 2019-12-12 11:14:10
Question: When running the celery worker, each line of the pprint output is prefixed with the timestamp and its whitespace is stripped, which makes it quite unreadable:

    [2015-11-05 16:01:12,122: WARNING/Worker-2] {
    [2015-11-05 16:01:12,122: WARNING/Worker-2] u'key1'
    [2015-11-05 16:01:12,122: WARNING/Worker-2] :
    [2015-11-05 16:01:12,122: WARNING/Worker-2] 'value1'
    [2015-11-05 16:01:12,122: WARNING/Worker-2] , u'_id':
    [2015-11-05 16:01:12,122: WARNING/Worker-2] ObjectId(
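The splitting happens because the worker redirects stdout to its logger write by write, so each fragment gets its own prefixed record. One workaround (a sketch, not necessarily the accepted answer) is to format the structure into a single string first and emit it in one logging call; `app` is assumed to be the Celery application defined elsewhere:

```python
from pprint import pformat
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task
def dump(document):
    # one log record, so the timestamp prefix appears only once
    logger.info("document:\n%s", pformat(document))
```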

Celery: No Result Backend Configured?

Submitted by 只谈情不闲聊 on 2019-12-12 10:54:03
Question: I am trying to check Celery results from the command line but get a "No Result Backend Configured" error. I have set up Redis as my result backend and am now at a loss. I have the Celery app set up like so:

qflow/celery.py:

    os.environ.setdefault('CELERY_CONFIG_MODULE', 'qflow.celeryconfig')

    app = Celery(
        'qflow',
        include=['qflow.tasks']
    )
    app.config_from_envvar('CELERY_CONFIG_MODULE')

The config module (qflow/celeryconfig.py) looks like so:

    broker_url = 'redis://localhost:6379/0'
    result
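For comparison, a complete config module in the same lowercase style declares both settings, and the result backend only takes effect for a command-line check if that command loads the same app (e.g. `celery -A qflow result <task-id>`). The second line below shows the setting the excerpt is presumably about to define; the exact backend URL is an assumption:

```python
# Hypothetical qflow/celeryconfig.py with both lowercase-style settings
broker_url = 'redis://localhost:6379/0'
result_backend = 'redis://localhost:6379/0'
```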

Stopping celery task gracefully

Submitted by 一曲冷凌霜 on 2019-12-12 09:52:01
Question: I'd like to quit a Celery task gracefully (i.e. not by calling revoke(celery_task_id, terminate=True)). I thought I'd send a message to the task that sets a flag, so that the task function can return. What's the best way to communicate with a task?

Answer 1: Use signals for this. Celery's revoke is the right choice; it uses SIGTERM by default, but you can specify another signal using the signal argument if you prefer. Just set a signal handler for it in your task (using the signal module) that
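A sketch of how that answer's approach could look: the task installs a handler for the chosen signal, the handler sets a flag, and the task returns at the next safe point. SIGUSR1 and do_one_chunk are illustrative choices, and `app` is assumed to be the Celery application:

```python
import signal

stop_requested = False

def _request_stop(signum, frame):
    global stop_requested
    stop_requested = True

@app.task(bind=True)
def long_running(self):
    signal.signal(signal.SIGUSR1, _request_stop)
    while not stop_requested:
        do_one_chunk()            # hypothetical unit of work
    return "stopped cleanly"

# caller side: deliver the chosen signal instead of a hard SIGTERM
# app.control.revoke(task_id, terminate=True, signal='SIGUSR1')
```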

Tornado celery can't use gen.Task or CallBack

Submitted by 自作多情 on 2019-12-12 09:41:10
Question:

    class AsyncHandler(tornado.web.RequestHandler):
        @tornado.web.asynchronous
        def get(self):
            tasks.sleep.apply_async(args=[5], callback=self.on_result)

        def on_result(self, response):
            self.write(str(response.result))
            self.finish()

This raises the error:

    raise TypeError(repr(o) + " is not JSON serializable")
    TypeError: <bound method AsyncHandler.on_result of <__main__.AsyncHandler object at 0x10e7a19d0>> is not JSON serializable

The broker and backend both use Redis; I just copied from https://github.com
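The code looks like the tornado-celery (tcelery) example; with a plain Celery producer the callback keyword ends up inside the message, and the JSON serializer then chokes on the bound method, which matches the traceback. A hedged sketch of the setup that example assumes (tcelery's API details may differ by version):

```python
import tornado.web
import tcelery
import tasks  # module defining the `sleep` task

# install tornado-celery's non-blocking producer before using callback=
tcelery.setup_nonblocking_producer()

class AsyncHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        tasks.sleep.apply_async(args=[5], callback=self.on_result)

    def on_result(self, response):
        self.write(str(response.result))
        self.finish()
```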