celery

Can't start Celery worker on Windows 10 with “PicklingError”

Submitted by 白昼怎懂夜的黑 on 2019-12-12 09:03:40

Question: I have some simple test code that runs successfully on Linux, but it won't run on my Windows 10 x64 computer. When I try to start a Celery worker, it aborts with an unrecoverable PicklingError. (Celery version: 3.1.20.) In my Celery config I've set the serialization to 'json', but that didn't help at all:

CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']

Here is the full error message:

[2016-02-09 15:11:48,532: ERROR/MainProcess …
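Worth noting (an aside, since the answer is cut off above): the *_SERIALIZER settings only control how task messages are encoded; a PicklingError at worker startup on Windows typically comes from the default prefork pool spawning child processes. A commonly suggested workaround is to run the worker with a pool that does not fork. A minimal sketch, assuming a tasks.py app like this:

# tasks.py -- minimal app; broker URL is a placeholder.
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')
app.conf.update(
    CELERY_RESULT_SERIALIZER='json',
    CELERY_TASK_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
)

@app.task
def ping():
    return 'pong'

# Start with the single-threaded 'solo' pool, which avoids the
# Windows process-spawning (and hence pickling) path entirely:
#   celery -A tasks worker --pool=solo --loglevel=info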

Celery task chain cancelling?

Submitted by 笑着哭i on 2019-12-12 08:35:14

Question: I found that Celery supports task chains: http://celery.readthedocs.org/en/latest/userguide/canvas.html#chains. The question is: how can I stop a chain's execution from within a task? For example, we have a chain of N items (N > 2), and in the second task we realize that none of the remaining tasks need to be executed. What should we do?

Answer 1: In newer versions of Celery (3.1.6) you can revoke an entire chain by simply walking the chain and revoking each item in turn.

# Build a chain for results
from tasks import …
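The answer's code is truncated above; a minimal sketch of the walk it describes (assuming a hypothetical tasks module with an add task) might look like this:

from celery import chain
from tasks import add  # hypothetical task, standing in for the truncated import

# Calling the chain returns the AsyncResult of the *last* task; each
# result links to its predecessor via .parent.
res = chain(add.s(1, 2), add.s(4), add.s(8))()

# Walk backwards through the chain, revoking every pending task.
# Note: revoke() only prevents tasks that have not started yet.
node = res
while node is not None:
    node.revoke()
    node = node.parent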

Celery worker and command line args

Submitted by 限于喜欢 on 2019-12-12 08:35:06

Question: I am refactoring my code to use a Celery worker. Previously I used argparse to pass command-line arguments, e.g.:

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Node')
    parser.add_argument('--environment', action="store", default='local',
                        help="env, e.g. production or development")
    arg_options = parser.parse_args()
    environment = arg_options.environment

But now I get this error:

celery -A tasks worker --loglevel=info --environment local
celery: error: no such option: --environment

How can I add it? I don't want …
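The question is cut off mid-sentence; one supported route (a sketch based on Celery 3.1's documented user_options API, not on the truncated answer) is to register the extra option with the worker command and consume it from a worker bootstep:

from celery import Celery, bootsteps
from celery.bin import Option

app = Celery('tasks')

# Register --environment as a real worker option so the CLI accepts it.
app.user_options['worker'].add(
    Option('--environment', action='store', default='local',
           help='env, e.g. production or development'),
)

class EnvironmentStep(bootsteps.Step):
    # Custom worker options arrive here as keyword arguments.
    def __init__(self, worker, environment='local', **options):
        # Stash the value somewhere tasks can reach it.
        worker.app.conf.ENVIRONMENT = environment

app.steps['worker'].add(EnvironmentStep)

# Usage:
#   celery -A tasks worker --loglevel=info --environment production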

Celery: auto discovery does not find tasks module in app

Submitted by 岁酱吖の on 2019-12-12 08:23:09

Question: I have the following setup with a freshly installed Celery and Django 1.4:

settings.py:

import djcelery
djcelery.setup_loader()

BROKER_HOST = 'localhost'
BROKER_PORT = 5672
BROKER_USER = 'user'
BROKER_PASSWORD = 'password'
BROKER_VHOST = 'test'
[...]
INSTALLED_APPS = [
    'django.contrib.auth',
    'django.contrib.admin',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.staticfiles',
    'djcelery',
    'south',
    'compressor',
    'testapp',
]

testapp/tasks.py:

from …
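The excerpt ends just as testapp/tasks.py begins, so the actual task code and answer are missing. As a hedged fallback when the djcelery loader's auto-discovery misses a module, the task module can be named explicitly in settings:

# settings.py -- explicitly register the task module with the worker,
# bypassing auto-discovery.
CELERY_IMPORTS = ('testapp.tasks',)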

Debugging djcelery's celeryd via pdb

Submitted by ↘锁芯ラ on 2019-12-12 08:18:28

Question: Has anybody tried debugging a celeryd worker using pdb? Whenever a breakpoint is encountered (either by running celeryd via pdb, or via pdb.set_trace()), I hit the following error:

Error while handling action event.
Traceback (most recent call last):
  File "/home/jeeyo/workspace3/uwcr/subscriptions/tasks.py", line 79, in process_action_event
    func(action_event)
  File "/home/jeeyo/workspace3/uwcr/subscriptions/tasks.py", line 36, in new_user_email
    send_registration_email(username, new_user.get …
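The traceback is cut off above; regardless of the specific error, plain pdb cannot attach a terminal to a pooled worker process. Celery ships a remote debugger for exactly this case, celery.contrib.rdb. A minimal sketch (assuming 'app' is your Celery instance):

from celery.contrib import rdb

@app.task
def new_user_email(user_id):
    rdb.set_trace()  # pauses here and listens on a TCP port (6900 by default)
    ...

# Then, from another shell, attach with:
#   telnet localhost 6900
# The port can be moved via the CELERY_RDB_PORT environment variable,
# and the bind address via CELERY_RDB_HOST.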

Celery Task Priority

Submitted by 怎甘沉沦 on 2019-12-12 08:06:42

Question: I want to manage tasks using Celery. I want a single task queue (with concurrency 1) and the ability to push tasks onto the queue with different priorities, such that higher-priority tasks preempt the others. I am adding three tasks to the queue like so:

add_tasks.py:

from tasks import example_task

example_task.apply_async((1,), priority=1)
example_task.apply_async((2,), priority=3)
example_task.apply_async((3,), priority=2)

I have the following configuration:

tasks.py:

from __future__ …
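The configuration is cut off above; as a hedged aside (the broker is not shown in the excerpt), the priority argument only has an effect if the underlying queue supports it. With RabbitMQ 3.5+ that means declaring the queue with an x-max-priority argument, and keeping prefetch low so the worker does not reserve low-priority messages in advance. Note this reorders *waiting* tasks; a task that is already executing is not interrupted. A sketch of the Celery 3.x settings:

from kombu import Queue

# Declare the queue as a RabbitMQ priority queue (priorities 0-9 here).
CELERY_QUEUES = (
    Queue('tasks', routing_key='tasks',
          queue_arguments={'x-max-priority': 9}),
)
CELERY_DEFAULT_QUEUE = 'tasks'

# With a prefetch of 1, the single worker process only reserves one
# message at a time, letting higher-priority messages jump ahead.
CELERYD_PREFETCH_MULTIPLIER = 1
CELERY_ACKS_LATE = True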

Django matching query does not exist after object save in Celery task

Submitted by 不问归期 on 2019-12-12 07:16:02

Question: I have the following code:

@task()
def handle_upload(title, temp_file, user_id):
    ...
    photo.save()
    # If I insert "photo2 = Photo.objects.get(pk=photo.pk)" here, it works,
    # including the view function.
    return photo.pk

# view function
def upload_status(request):
    task_id = request.POST['task_id']
    async_result = AsyncResult(task_id)
    photo_id = async_result.get()
    if async_result.successful():
        photo = Photo.objects.get(pk=photo_id)

I use an AJAX request to check for the uploaded file, but after the …
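The question is truncated, but the symptom described (the row is visible when re-fetched inside the task, yet DoesNotExist in the view) is a classic signature of transaction isolation: with MySQL's default REPEATABLE READ, for instance, the web process can be reading from a snapshot taken before the worker committed. A hedged workaround sketch, retrying the lookup in fresh transactions rather than assuming the row is instantly visible (module path is a placeholder):

import time

from django.db import transaction
from myapp.models import Photo  # 'myapp' is a placeholder for the real app

def get_photo_with_retry(photo_id, attempts=5, delay=0.2):
    # Each attempt runs in its own transaction, so a fresh snapshot
    # can observe the row the worker has since committed.
    for _ in range(attempts):
        try:
            with transaction.atomic():
                return Photo.objects.get(pk=photo_id)
        except Photo.DoesNotExist:
            time.sleep(delay)
    raise Photo.DoesNotExist(photo_id)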

AMQP connection reset by peer, but celery connected

Submitted by 蓝咒 on 2019-12-12 05:37:43

Question: I have a Flask app using Celery with RabbitMQ as the broker. I've followed the instructions in this answer to get started. I have two machines: Machine A, where RabbitMQ runs, sends tasks to be consumed by Celery on Machine B. My broker URL and result-backend URL are the same: amqp://remote:***@12.345.678.999:5672/remote_host. Both machines have copies of the Flask app on them. RabbitMQ has been configured so that the user remote has all permissions granted (".* .* .*"). All the communication …
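The excerpt ends before the actual traceback; as a hedged aside, "connection reset by peer" on an otherwise working two-machine setup is often an idle AMQP connection being dropped by the broker or by a firewall/NAT between the machines. Enabling AMQP heartbeats keeps the connection visibly alive; a sketch of the Celery 3.x settings (URLs copied from the question, credentials masked as in the original):

BROKER_URL = 'amqp://remote:***@12.345.678.999:5672/remote_host'
CELERY_RESULT_BACKEND = BROKER_URL

# Exchange a heartbeat frame roughly every 10 seconds so idle
# connections are not silently dropped in between tasks.
BROKER_HEARTBEAT = 10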

Django and Celery, AppRegistryNotReady exception

Submitted by 喜欢而已 on 2019-12-12 04:26:58

Question: I'm trying to integrate Celery into my Django project. I've followed the Celery docs, and I can execute a simple Hello World task. But when I try to import my models into my task definitions, I get the AppRegistryNotReady exception. I've found some older discussions around this exception, but nothing current. I'm probably missing something quite simple. Python 3.5, Django 1.9, Celery 3.1.23.

celery.py:

from __future__ import absolute_import
import os
from celery import Celery
from …
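The file is cut off above; for reference, the layout from the Celery 3.1 Django guide looks like the sketch below ('proj' is a placeholder for the actual project name). The usual trigger for AppRegistryNotReady is importing models at module level before Django's app registry is populated; importing them inside the task body, and keeping autodiscover_tasks lazy via the lambda, avoids it.

from __future__ import absolute_import
import os

from celery import Celery

# Must run before anything imports Django models.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

from django.conf import settings  # noqa: E402

app = Celery('proj')
app.config_from_object('django.conf:settings')
# The lambda defers reading INSTALLED_APPS until the registry is ready.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task
def count_users():
    # Import models at call time, after django.setup() has run.
    from django.contrib.auth.models import User
    return User.objects.count()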

Force stop celery workers running as a systemd service

Submitted by 一曲冷凌霜 on 2019-12-12 04:21:32

Question: How do I kill the workers when I reboot the server, getting the same effect as the following statement?

pkill -9 -f 'celery worker'

From the Celery documentation:

"If the worker won't shut down after a considerable time, for being stuck in an infinite loop or similar, you can use the KILL signal to force terminate the worker."

But I am starting it as a systemd service, with the following unit file:

[Unit]
Description=Celery Service
After=network.target …
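The unit file is cut off above; a hedged sketch of the [Service] directives that reproduce the pkill -9 behaviour on shutdown. systemd first sends SIGTERM, then escalates to SIGKILL for anything still alive after the stop timeout:

[Service]
# ... ExecStart and the rest of your existing unit ...
KillMode=mixed          # SIGTERM to the main process, SIGKILL later to the whole cgroup
TimeoutStopSec=10       # grace period for a clean Celery shutdown
SendSIGKILL=yes         # after the timeout, SIGKILL everything left in the cgroup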