celery-task

Celery unregistered task KeyError

Submitted by 别等时光非礼了梦想 on 2019-12-11 08:30:32

Question: I start the worker by executing the following in the terminal:

    celery -A cel_test worker --loglevel=INFO --concurrency=10 -n worker1.%h

Then I get a long, looping error message stating that Celery has received an unregistered task and has triggered:

    KeyError: 'cel_test.grp_all_w_codes.mk_dct'  # this is the name of the task

The problem is that cel_test.grp_all_w_codes.mk_dct doesn't exist. In fact there isn't even a module cel_test.grp_all_w_codes, let alone the task mk_dct. There was …
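A minimal sketch of how to keep the worker's task registry and the names that callers send in sync (the broker URL and module layout below are assumptions, not the asker's actual project): pin the task name explicitly and list the modules the worker must import.

    from celery import Celery

    # Assumed layout: a package cel_test with a tasks module.
    app = Celery('cel_test',
                 broker='redis://localhost:6379/0',   # assumption
                 include=['cel_test.tasks'])          # modules the worker imports at startup

    @app.task(name='cel_test.tasks.mk_dct')           # explicit, stable task name
    def mk_dct(codes):
        # toy body for illustration only
        return {code: True for code in codes}

    # Sanity check on the worker side: print(sorted(app.tasks.keys()))

Messages for a task name that no longer exists anywhere in the code are often leftovers sitting in the broker queue or coming from stale worker/producer processes, so purging the queue or restarting everything is frequently part of the fix.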

Celery Flower - how can I load previously caught tasks?

Submitted by 浪子不回头ぞ on 2019-12-11 06:45:02

Question: I started to use Celery Flower for task monitoring and it is working like a charm. I have one concern though: how can I "reload" info about monitored tasks after a Flower restart? I use Redis as a broker, and I need to be able to check on tasks even after an unexpected restart of the service (or server). Thanks in advance.

Answer 1: I found it out. It is a matter of setting the persistent flag in the command that runs Celery Flower.

Source: https://stackoverflow.com/questions/22553659/celery-flower-how
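For reference, a hedged example of what the answer's "persistent flag" can look like on the command line (the app name proj and the database file name are placeholders): Flower can be told to save its state to a local database file and reload it after a restart.

    celery -A proj flower --persistent=True --db=flower.db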

Using group result in a Celery chain

Submitted by ∥☆過路亽.° on 2019-12-11 06:32:20

Question: I'm stuck with a relatively complex Celery chain configuration, trying to achieve the following. Assume there's a chain of tasks like this:

    chain1 = chain(
        DownloadFile.s("http://someserver/file.gz"),  # downloads file, returns temp file name
        UnpackFile.s(),                               # unpacks the gzip comp'd file, returns temp file name
        ParseFile.s(),                                # parses file, returns list of URLs to download
    )

Now I want to download each URL in parallel, so what I did was:

    urls = chain1.get()
    download_tasks = map(lambda x …
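One common workaround, sketched here under assumptions (that app is the Celery application and that ParseFile really returns a plain list of URLs): instead of blocking on chain1.get() and building the group by hand, append a small fan-out task to the end of the chain.

    from celery import chain, group, signature

    @app.task
    def dmap(items, callback):
        # Clone the callback signature once per item and launch the clones as a group.
        callback = signature(callback)
        return group(callback.clone([item]) for item in items)()

    workflow = chain(
        DownloadFile.s("http://someserver/file.gz"),
        UnpackFile.s(),
        ParseFile.s(),                 # returns the list of URLs
        dmap.s(DownloadFile.s()),      # downloads each URL in parallel
    )
    workflow.delay()

This keeps everything asynchronous instead of blocking on chain1.get() in the calling code.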

Duplicated tasks after time change

Submitted by 风格不统一 on 2019-12-11 00:53:24

Question: I don't know exactly why, but I am getting duplicated tasks. I think this may be related to the time change last weekend (the clock was set back an hour on the system). The first task should not be executed, since I explicitly say hour=2. Any idea why this happens?

    [2017-11-01 01:00:00,001: INFO/Beat] Scheduler: Sending due task every-first-day_month (app.users.views.websites_down)
    [2017-11-01 02:00:00,007: INFO/Beat] Scheduler: Sending due task every-first-day_month (app.users.views …
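A hedged configuration sketch (the schedule name and task path are taken from the log above; day_of_month=1 is inferred from the schedule name, and the timezone choice is an assumption): pinning Celery beat to a DST-free timezone such as UTC is one way to keep a crontab entry from firing twice when local clocks are set back.

    from celery.schedules import crontab

    app.conf.timezone = 'UTC'          # no DST transitions
    app.conf.beat_schedule = {
        'every-first-day_month': {
            'task': 'app.users.views.websites_down',
            'schedule': crontab(minute=0, hour=2, day_of_month=1),
        },
    }

(These are the Celery 4 setting names; the 3.x equivalents are CELERY_TIMEZONE and CELERYBEAT_SCHEDULE.)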

Have Celery broadcast return results from all workers

Submitted by 天大地大妈咪最大 on 2019-12-10 19:31:41

Question: Is there a way to get all the results from every worker on a Celery broadcast task? I would like to monitor whether everything went OK on all the workers. A list of the workers the task was sent to would also be appreciated.

Answer 1: No, that is not easily possible. But you don't have to limit yourself to the built-in amqp result backend; you can send your own results using Kombu (http://kombu.readthedocs.org), which is the messaging library used by Celery:

    from celery import Celery
    from kombu import …
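The answer's own code is cut off above; as a stand-in, here is a hedged sketch of the general idea (the queue name, exchange and broker URL are all assumptions): each worker publishes its outcome to a shared Kombu queue, and the monitoring side drains that queue.

    import socket
    from kombu import Connection, Exchange, Queue

    exchange = Exchange('broadcast_results', type='direct')
    results_queue = Queue('broadcast_results', exchange, routing_key='broadcast_results')

    def publish_result(payload, broker_url='amqp://'):
        # Called at the end of the broadcast task on every worker;
        # payload could include the worker hostname to list who responded.
        with Connection(broker_url) as conn:
            producer = conn.Producer(serializer='json')
            producer.publish(payload, exchange=exchange,
                             routing_key='broadcast_results',
                             declare=[results_queue])

    def collect_results(broker_url='amqp://', timeout=5):
        # Called by the monitoring side; drains whatever the workers published.
        collected = []

        def on_message(body, message):
            collected.append(body)
            message.ack()

        with Connection(broker_url) as conn:
            with conn.Consumer(results_queue, callbacks=[on_message]):
                try:
                    while True:
                        conn.drain_events(timeout=timeout)
                except socket.timeout:
                    pass            # no more messages arrived within the window
        return collected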

Celery Result error “args must be a list or tuple”

Submitted by 走远了吗 on 2019-12-10 18:08:31

Question: I am running a Django website and have just gotten Celery to run, but I am getting confusing errors. Here is how the code is structured. In tests.py:

    from tasks import *
    from celery.result import AsyncResult

    project = Project.objects.create()
    # initialize various sub-objects of the project
    c = function.delay(project.id)
    r = AsyncResult(c.id).ready()
    f = AsyncResult(c.id).failed()
    # wait until the task is done
    while not r and not f:
        r = AsyncResult(c.id).ready()
        f = AsyncResult(c.id).failed()
    …
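For reference, a hedged illustration of where Celery raises "args must be a list or tuple" in general (this may or may not be the asker's root cause): apply_async expects its positional arguments as a sequence, so a bare value has to be wrapped.

    result = function.apply_async(args=[project.id])    # OK: args as a list
    result = function.apply_async((project.id,))        # OK: one-element tuple
    # result = function.apply_async(project.id)         # raises the "args must be a list or tuple" error
    result = function.delay(project.id)                 # delay() wraps the args for you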

Celery [CRITICAL/MainProcess] Unrecoverable error: AttributeError("'float' object has no attribute 'items'",)

Submitted by 耗尽温柔 on 2019-12-09 14:48:19

Question: I've been running a Flask application with a Celery worker and Redis in three separate Docker containers without any issue. This is how I start it:

    celery worker -A app.controller.engine.celery -l info --concurrency=2 --pool eventlet

Celery starts fine; the startup banner shows celery@a828bd5b0089 v4.2.1 (windowlicker) on Linux-4.9.93-linuxkit-aufs-x86_64 (2018-11-15 16:06:59), with app: app.controller.engine …

Test if a celery task is still being processed

Submitted by 纵然是瞬间 on 2019-12-09 05:37:46

Question: How can I test whether a task (task_id) is still being processed in Celery? I have the following scenario:

1. Start a task in a Django view
2. Store the BaseAsyncResult in the session
3. Shut down the Celery daemon (hard) so the task is not processed anymore
4. Check if the task is 'dead'

Any ideas? Can I look up all tasks being processed by Celery and check if mine is still there?

Answer 1: Define a field (PickledObjectField) in your model to store the celery task:

    class YourModel(models.Model):
        ...
        celery_task = …
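Complementing the stored-result approach, a hedged sketch of the "look up all tasks being processed" idea from the question (assumes a modern app-style setup where app is the Celery application, and that task_id was saved earlier): ask the live workers which task ids they are currently executing.

    def task_is_active(task_id):
        inspector = app.control.inspect()
        active = inspector.active() or {}   # {worker_name: [task dicts]}, or None if no workers reply
        return any(t.get('id') == task_id
                   for tasks in active.values()
                   for t in tasks)

If the worker was killed hard, it simply never replies, so its tasks no longer show up as active.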

Celery task function custom attributes

Submitted by 巧了我就是萌 on 2019-12-08 02:43:53

Question: I have a Celery task function that looks like this:

    @task(base=MyBaseTask)
    @my_custom_decorator
    def my_task(*args, **kwargs):
        my_task.ltc.some_func()  # fails - attribute ltc doesn't exist on the object

and my_custom_decorator looks like this:

    def my_custom_decorator(f):
        from functools import wraps
        ltc = SomeClass()

        @wraps(f)
        def _inner(*args, **kwargs):
            ret_obj = None
            try:
                f.task_cache = ltc
                ret_obj = f(*args, **kwargs)
            except Exception, e:
                raise
            return ret_obj

        _inner.ltc = ltc
        return _inner

I …
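A hedged sketch of one workaround (reusing the names task, MyBaseTask and SomeClass from the snippet above): keep the shared object on the custom base Task class and reach it through the bound task instance, since attributes set on the inner wrapper function are generally not carried over onto the task object.

    from celery import Task

    class MyBaseTask(Task):
        ltc = SomeClass()                # shared object lives on the task class

    @task(base=MyBaseTask, bind=True)
    def my_task(self, *args, **kwargs):
        self.ltc.some_func()             # available as an attribute of the bound task instance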

AsyncResult(task_id) returns “PENDING” state even after the task started

Submitted by 三世轮回 on 2019-12-07 03:22:46

Question: In the project, I try to poll task.state of a long-running task and update its running status. It worked in development, but it doesn't work after I move the project to the production server. I keep getting 'PENDING' even though I can see the task has started on Flower. However, I can still get the results updated when the task finishes, i.e. when task.state == 'SUCCESS'. I use Python 2.6, Django 1.6 and Celery 3.1 in production, with the AMQP result backend.

    @csrf_exempt
    def poll_state(request):
        data = 'Fail' …
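A hedged note on a setting that often explains this symptom (the setting name below is the Celery 3.x spelling, matching the versions above; whether it applies to this particular deployment is an assumption): Celery reports PENDING for any task it has no recorded state for, and it only records a STARTED state when explicitly asked to.

    # settings.py (Celery 3.x style)
    CELERY_TRACK_STARTED = True          # tasks report STARTED instead of staying PENDING
    # In Celery 4+ the same option is spelled: task_track_started = True

    # polling side, assuming task_id arrives with the request as in the view above
    from celery.result import AsyncResult
    state = AsyncResult(task_id).state   # 'STARTED' while running, 'SUCCESS' when done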