celery

Running Celery as root

Posted by 喜你入骨 on 2019-12-03 05:06:22
Question: I need to run my Django project along with Celery as root for access reasons. It says I need to set the C_FORCE_ROOT environment variable. How/where do I set the environment variable?

Answer 1: You can set it to true like this:

# export C_FORCE_ROOT="true"

Then check that it is set as an environment variable:

# echo $C_FORCE_ROOT
true

But be sure to make it permanent, as the setting will vanish with the next restart. Have fun!

Answer 2: 1st solution - manually type the command at a terminal: $ export C_FORCE_ROOT='true' 2nd
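A minimal sketch (an aside, not from the answers above) of setting the variable programmatically so it does not depend on the shell: assign it at the top of the module that defines the Celery app, before the worker initializes. The project name and broker URL below are illustrative.

```python
import os

# Celery checks C_FORCE_ROOT when the worker starts, so it must be set
# before the worker process initializes.
os.environ.setdefault("C_FORCE_ROOT", "true")

from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")  # illustrative broker URL
```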

Work around celerybeat being a single point of failure

Posted by 家住魔仙堡 on 2019-12-03 05:04:21
Question: I'm looking for a recommended solution to work around celerybeat being a single point of failure in a celery/rabbitmq deployment. I haven't found anything that made sense so far by searching the web. In my case, a once-a-day timed scheduler kicks off a series of jobs that could run for half a day or longer. Since there can only be one celerybeat instance, if something happens to it or to the server it's running on, critical jobs will not be run. I'm hoping there is already a working solution for
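One commonly suggested workaround (a hedged sketch, not an answer from the original post) is to run celerybeat on two machines and make the scheduled task itself idempotent by guarding it with a short-lived distributed lock, so duplicate dispatches collapse to a single run. The broker URL, lock key, and timeout below are illustrative.

```python
import redis
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")
locker = redis.Redis(host="localhost", port=6379, db=1)

@app.task
def nightly_job():
    # nx=True: only one of the duplicate invocations acquires the lock;
    # ex=...: the lock expires on its own if the holder dies mid-run.
    if not locker.set("lock:nightly_job", "1", nx=True, ex=12 * 60 * 60):
        return "skipped: already running elsewhere"
    try:
        ...  # the actual long-running work
    finally:
        locker.delete("lock:nightly_job")
```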

Celery Get List Of Registered Tasks

Posted by 雨燕双飞 on 2019-12-03 04:57:02
Question: Is there a way to get a list of registered tasks? I tried:

celery_app.tasks.keys()

which only returns built-in Celery tasks like celery.chord, celery.chain, etc.

Answer:

from celery.task.control import inspect
i = inspect()
i.registered_tasks()

This will give a dictionary of all workers and their registered tasks.

from itertools import chain
set(chain.from_iterable(i.registered_tasks().values()))

If you have multiple workers running the same tasks, or if you just need a flat set of all registered tasks, this does the job.

Alternate way: from the terminal you can get a dump of registered tasks by using this
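For reference, a runnable sketch of the inspect() approach collapsed into a flat set of task names across all workers, using the modern import path (assumes at least one worker is online; the app name and broker URL are illustrative):

```python
from itertools import chain
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")

# registered_tasks() returns {worker_name: [task names]}, or None if no worker replies.
replies = app.control.inspect().registered_tasks() or {}
all_tasks = set(chain.from_iterable(replies.values()))
print(sorted(all_tasks))
```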

Capture Heroku SIGTERM in Celery workers to shutdown worker gracefully

Posted by 天大地大妈咪最大 on 2019-12-03 04:52:50
I've done a ton of research on this, and I'm surprised I haven't found a good answer anywhere yet. I'm running a large application on Heroku, and I have certain celery tasks that run for a very long time and save a result at the end. Every time I redeploy on Heroku, it sends SIGTERM (and eventually SIGKILL) and kills my running worker. I'm trying to find a way for the worker instance to shut itself down gracefully and re-queue the task for processing later, so that we can eventually save the required result instead of losing the queued task. I cannot find a way
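One common pattern in newer Celery versions (a hedged sketch, not taken from this post) is to acknowledge the task only after it finishes and ask the broker to re-deliver it if the worker is lost mid-run, so a SIGTERM/SIGKILL cycle re-queues the work instead of dropping it. The app name, broker URL, and task body are illustrative.

```python
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")

# acks_late: the message stays unacknowledged until the task returns;
# reject_on_worker_lost: if the worker process is killed, the broker
# re-queues the message for another worker to pick up.
@app.task(bind=True, acks_late=True, reject_on_worker_lost=True)
def long_running_job(self, payload):
    ...  # do the long processing and save the result at the end
```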

Django Celery: Admin interface showing zero tasks/workers

Posted by 大憨熊 on 2019-12-03 04:48:28
Question: I've set up Celery with the Django ORM as back-end and am trying to monitor what's going on behind the scenes. I've started celeryd with the -E flag:

python manage.py celeryd -E -l INFO -v 1 -f /path/to/celeryd.log

and started celerycam with the default snapshot frequency of 1 second:

python manage.py celerycam

I can see the tasks being executed (in the celery log) and results being stored (data models are periodically changed by those tasks). However, the Task/Worker pages in the Django admin panel show zero items.
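As a hedged aside (not from the post): the django-celery admin pages are populated from worker events, which is what the -E flag switches on, and the same behaviour can be enabled permanently in settings instead of relying on the command-line flag. The old-style setting names below match the Celery 3.x / django-celery era of this question.

```python
# settings.py (assumption: a Celery 3.x / django-celery setup)
CELERY_SEND_EVENTS = True            # worker emits task events for celerycam to snapshot
CELERY_SEND_TASK_SENT_EVENT = True   # also emit "task-sent" events from the publisher side
```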

Celery - How to send task from remote machine?

Posted by 旧巷老猫 on 2019-12-03 04:48:24
Question: We have a server running celery workers and a Redis queue. The tasks are defined on that server. I need to be able to call these tasks from a remote machine. I know that it is done using send_task, but I still haven't figured out how. How do I tell send_task where the queue is? Where do I pass the connection params (or whatever is needed)? I've been looking for hours and all I can find is this:

from celery.execute import send_task
send_task('tasks.add')

Well, that means that I need celery on my
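A hedged sketch of the usual pattern (the broker URL, task name, and queue below are illustrative): on the remote machine you only need a Celery "client" app configured with the same broker; the task body does not have to be importable there, because send_task dispatches by name.

```python
from celery import Celery

# Point the client at the broker that the workers consume from.
client = Celery(broker="redis://server-with-queue:6379/0")

result = client.send_task("tasks.add", args=(2, 3), queue="celery")
print(result.id)  # calling result.get() would additionally require a result backend
```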

Why do CELERY_ROUTES have both a “queue” and a “routing_key”?

Posted by 点点圈 on 2019-12-03 04:44:23
Question: My understanding of AMQP is that messages only have the following components: the message body, the routing key, and the exchange. Queues are attached to exchanges. Messages can't have any knowledge of queues; they just post to an exchange, and then, based on the exchange type and routing key, the messages are routed to one or more queues. In Celery, the recommended way of routing tasks is through the CELERY_ROUTES setting. From the docs, CELERY_ROUTES is... A list of routers, or a single router used
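For context, a hedged illustration of the setting the question refers to (the task, queue, and routing-key names follow the style of the Celery docs and are not from this post): a router entry may name both the queue and the routing key, since Celery can declare the queue and its binding for you if they do not already exist on the broker.

```python
CELERY_ROUTES = {
    "feeds.tasks.import_feed": {
        "queue": "feed_tasks",          # queue Celery will declare/consume from
        "routing_key": "feed.import",   # key used when publishing to the exchange
    },
}
```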

Django, ImportError: cannot import name Celery, possible circular import?

Posted by 偶尔善良 on 2019-12-03 04:43:27
Question: I went through this example here: http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html All my tasks are in files called tasks.py. After updating Celery and adding the file from the example, Django throws the following error no matter what I try:

ImportError: cannot import name Celery

Is the problem possibly caused by the following?

app.autodiscover_tasks(settings.INSTALLED_APPS, related_name='tasks')

Because it goes through all tasks.py files, which all have the
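The linked first-steps guide addresses exactly this kind of name collision; as a hedged sketch (the module and project names are illustrative), the project-level celery.py uses an absolute import so that "from celery import Celery" resolves to the installed library rather than the local module of the same name.

```python
# proj/celery.py
from __future__ import absolute_import  # Python 2: don't import proj/celery.py itself

import os

from celery import Celery
from django.conf import settings

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings")

app = Celery("proj")
app.config_from_object("django.conf:settings")
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
```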

How to make a celery task fail from within the task?

Posted by 不打扰是莪最后的温柔 on 2019-12-03 04:20:58
Under some conditions, I want to make a celery task fail from within that task. I tried the following:

from celery.task import task
from celery import states

@task()
def run_simulation():
    if some_condition:
        run_simulation.update_state(state=states.FAILURE)
        return False

However, the task still reports that it succeeded:

Task sim.tasks.run_simulation[9235e3a7-c6d2-4219-bbc7-acf65c816e65] succeeded in 1.17847704887s: False

It seems that the state can only be modified while the task is running; once it is completed, celery changes the state to whatever it deems the outcome to be (refer to this
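A hedged sketch of the widely used workaround (not quoted from this post): set the FAILURE state yourself and then raise Ignore so Celery does not overwrite it with SUCCESS when the function returns. The app name, broker URL, and condition are placeholders.

```python
from celery import Celery, states
from celery.exceptions import Ignore

app = Celery("proj", broker="redis://localhost:6379/0")

def some_condition():
    return True  # placeholder for the question's condition

@app.task(bind=True)
def run_simulation(self):
    if some_condition():
        # Record the failure, then tell Celery to leave the state alone.
        self.update_state(state=states.FAILURE, meta="simulation rejected")
        raise Ignore()
    ...  # normal simulation work
```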

Django Celery Database for Models on Producer and Worker

Posted by 核能气质少年 on 2019-12-03 04:03:22
I want to develop an application which uses Django as the frontend and Celery to do background work. Now, sometimes Celery workers on different machines need database access to my Django frontend machine (two different servers). They need to know some realtime state, and to run the Django app with python manage.py celeryd they need access to a database with all models available. Do I have to access my MySQL database through a direct connection? That would mean allowing the user "my-django-app" access not only from localhost on my frontend machine but also from my other worker servers' IPs. Is this the "right" way,
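If the direct-connection route is taken, a hedged sketch of what it looks like on the worker machines (the host, database, user, and password below are illustrative): the workers reuse the Django settings but point DATABASES at the frontend's MySQL server, which in turn must allow that user to connect from the worker IPs.

```python
# settings.py on the worker machines
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "myapp",
        "USER": "my-django-app",
        "PASSWORD": "change-me",
        "HOST": "frontend.example.internal",  # the Django frontend's database server
        "PORT": "3306",
    }
}
```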