celery

'./manage.py runserver' restarts when celery map/reduce tasks are running; sometimes raises error with inner_run

Submitted by 扶醉桌前 on 2019-12-11 08:59:53

Question: I have a view in my Django project that fires off a Celery task. The Celery task itself triggers a few map/reduce jobs via subprocess/fabric, and the results of the Hadoop job are stored on disk --- nothing is actually stored in the database. After the Hadoop job has completed, the Celery task sends a Django signal that it is done, something like this:

    # tasks.py
    from models import MyModel
    import signals

    from fabric.operations import local
    from celery.task import Task

    class
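The flow described above (a view fires the task, the task shells out via Fabric, then a Django signal announces completion) might be sketched roughly as below; the signal name, task class, and Hadoop command are illustrative assumptions, not taken from the question:

    # signals.py -- a hypothetical custom signal fired when the Hadoop job is done
    import django.dispatch

    hadoop_job_done = django.dispatch.Signal()

    # tasks.py -- class-based task in the old celery.task style shown above
    import signals
    from fabric.operations import local
    from celery.task import Task

    class RunHadoopJob(Task):
        def run(self, input_path):
            # Shell out to Hadoop; results stay on disk, nothing goes to the DB.
            local("hadoop jar myjob.jar %s /tmp/hadoop-output" % input_path)
            # Notify the Django side that the job has finished.
            signals.hadoop_job_done.send(sender=self.__class__,
                                         output_dir="/tmp/hadoop-output")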

Celery unregistered task KeyError

Submitted by 别等时光非礼了梦想. on 2019-12-11 08:30:32

Question: I start the worker by executing the following in the terminal:

    celery -A cel_test worker --loglevel=INFO --concurrency=10 -n worker1.%h

Then I get a long looping error message stating that Celery has received an unregistered task and has triggered:

    KeyError: 'cel_test.grp_all_w_codes.mk_dct'  # this is the name of the task

The problem with this is that cel_test.grp_all_w_codes.mk_dct doesn't exist. In fact, there isn't even a module cel_test.grp_all_w_codes, let alone the task mk_dct. There was
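An unregistered-task KeyError generally means the broker still holds messages carrying a task name the worker never imported (for example, messages left over from an earlier code layout). A rough sketch of pinning imports and task names explicitly; the module layout and broker URL here are assumptions, not from the question:

    # cel_test/celery.py -- hypothetical layout
    from celery import Celery

    app = Celery('cel_test', broker='redis://localhost:6379/0')
    app.conf.imports = ('cel_test.tasks',)    # CELERY_IMPORTS in Celery 3.x

    # cel_test/tasks.py
    @app.task(name='cel_test.tasks.mk_dct')   # pin the registered name explicitly
    def mk_dct(codes):
        return {c: True for c in codes}

    # Stale messages sent under an old name can be dropped with:
    #   celery -A cel_test purge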

Connection refused for Redis on Heroku

Submitted by 一世执手 on 2019-12-11 08:09:11

Question: I'm trying to set up Redis on Heroku as a backend for Celery. I have it working locally, but on Heroku I get this error (after the Celery task completes):

    ConnectionError: Error 111 connecting localhost:6379. Connection refused.

From what I can tell from other answers, that would indicate that the Redis server isn't online, though the REDISTOGO_URL seems to be configured properly. In settings.py:

    REDIS_URL = os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0')

In tasks.py:

    from celery
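An "Error 111 connecting localhost:6379" on a Heroku dyno usually means some part of the stack is still falling back to the default localhost broker/backend instead of the Redis To Go URL. A minimal sketch of passing the environment-derived URL to both, assuming the REDIS_URL setting shown above (module and app names are placeholders):

    # proj/celery.py -- sketch
    import os
    from celery import Celery

    REDIS_URL = os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0')

    # Both the broker and the result backend must point at the Heroku Redis URL,
    # otherwise results are read back from a (non-existent) local Redis.
    app = Celery('proj', broker=REDIS_URL, backend=REDIS_URL)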

Trouble getting result from Celery queue

Submitted by 左心房为你撑大大i on 2019-12-11 07:48:36

Question: I have been playing with Celery on Windows 7. Right now, I am going through the Next Steps tutorial: http://docs.celeryproject.org/en/latest/getting-started/next-steps.html I created a celery.py file:

    from __future__ import absolute_import

    from celery import Celery

    app = Celery('proj',
                 broker='amqp://',
                 backend='amqp://',
                 include=['proj.tasks'])

    # app.conf.update(
    #     CELERY_TASK_RESULT_EXPIRES=3600,
    # )

    if __name__ == '__main__':
        app.start()

Then I created a tasks.py file:

    from __future__
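For context, the tutorial's tasks.py and the way a result would normally be fetched back look roughly like this (a sketch of the tutorial layout, not the asker's exact file, which is cut off above):

    # proj/tasks.py
    from __future__ import absolute_import

    from proj.celery import app

    @app.task
    def add(x, y):
        return x + y

    # With a worker running and the amqp result backend configured as above:
    #   >>> from proj.tasks import add
    #   >>> res = add.delay(4, 4)
    #   >>> res.get(timeout=10)
    #   8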

Celery Flower - how can I load previously caught tasks?

Submitted by 浪子不回头ぞ on 2019-12-11 06:45:02

Question: I started to use Celery Flower for task monitoring and it is working like a charm. I have one concern, though: how can I "reload" info about monitored tasks after a Flower restart? I use Redis as a broker, and I need to be able to check on tasks even in case of an unexpected restart of the service (or the server). Thanks in advance.

Answer 1: I found it out. It is a matter of setting the persistent flag in the command that runs Celery Flower.

Source: https://stackoverflow.com/questions/22553659/celery-flower-how
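The persistent mode mentioned in the answer is typically switched on from the command line, roughly as below; the app name and database path are placeholders:

    celery -A proj flower --persistent=True --db=/var/lib/flower/flower.db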

Using group result in a Celery chain

Submitted by ∥☆過路亽.° on 2019-12-11 06:32:20

Question: I'm stuck with a relatively complex Celery chain configuration, trying to achieve the following. Assume there's a chain of tasks like this:

    chain1 = chain(
        DownloadFile.s("http://someserver/file.gz"),  # downloads file, returns temp file name
        UnpackFile.s(),   # unpacks the gzip comp'd file, returns temp file name
        ParseFile.s(),    # parses file, returns list of URLs to download
    )

Now I want to download each URL in parallel, so what I did was:

    urls = chain1.get()
    download_tasks = map(lambda x
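Calling .get() on the chain from other task code is usually discouraged; a more idiomatic pattern in Celery 4+ is to append a small dispatcher task that replaces itself with a group built from the parsed URL list. A sketch, assuming the question's tasks are importable from a hypothetical module and using a hypothetical dispatch_downloads task:

    from celery import chain, group, shared_task

    from tasks import DownloadFile, UnpackFile, ParseFile  # the question's tasks (assumed module)

    @shared_task(bind=True)
    def dispatch_downloads(self, urls):
        # Task.replace() swaps this task for the group, so the downloads run in
        # parallel and their results become the chain's final result.
        raise self.replace(group(DownloadFile.s(u) for u in urls))

    workflow = chain(
        DownloadFile.s("http://someserver/file.gz"),
        UnpackFile.s(),
        ParseFile.s(),           # returns the list of URLs
        dispatch_downloads.s(),  # fans them out in parallel
    )
    workflow.delay()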

Same task executed multiple times

Submitted by 时光怂恿深爱的人放手 on 2019-12-11 06:01:26

Question: I have ETA tasks that get sent to a Redis broker for Celery. It is a single Celery and Redis instance, both on the same machine. The problem is, tasks are getting executed multiple times. I've seen tasks executed 4 to 11 times. I set the visibility timeout to 12 hours, given that my ETAs are between 4 and 11 hours (determined at runtime):

    BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 12 * 60 * 60}

Even with that, tasks still get executed multiple times. Initially, the task in question
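With the Redis transport, an ETA message that outlives the visibility timeout can be redelivered, so a common mitigation is to make the task itself idempotent. A rough sketch using a Redis lock key; the task name, key scheme, and TTL are assumptions for illustration:

    import redis
    from celery import shared_task

    r = redis.StrictRedis()

    @shared_task(bind=True)
    def send_reminder(self, reminder_id):
        # SET NX with an expiry: only the first delivery of this logical job proceeds.
        if not r.set('reminder-lock:%s' % reminder_id, self.request.id,
                     nx=True, ex=24 * 60 * 60):
            return  # duplicate delivery, skip
        # ... do the actual work exactly once ...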

Can I review and delete Celery / RabbitMQ tasks individually?

Submitted by 北城余情 on 2019-12-11 05:11:54

Question: I am running Django + Celery + RabbitMQ. After modifying some task names I started getting "unregistered task" KeyErrors, even after removing tasks with this key from the Periodic tasks table in Django Celery Beat and restarting the Celery worker. It turns out Celery / RabbitMQ tasks are persistent. I eventually resolved the issue by reimplementing the legacy tasks as dummy methods. In future, I'd prefer not to purge the queue, restart the worker, or reimplement legacy methods. Instead I'd
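If you want to look at the pending messages themselves rather than purge blindly, one option is to read them off RabbitMQ directly, for example with pika, and acknowledge (i.e. delete) only those carrying a legacy task name. A sketch under several assumptions: the default 'celery' queue, Celery's message protocol 2 with the task name in the headers, and a hypothetical legacy task name:

    import pika

    LEGACY = 'myapp.tasks.legacy_task'   # hypothetical old task name

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = conn.channel()

    while True:
        # Fetch one message without auto-acknowledging it.
        method, properties, body = channel.basic_get(queue='celery')
        if method is None:
            break                                    # nothing left to inspect
        task_name = (properties.headers or {}).get('task')
        if task_name == LEGACY:
            channel.basic_ack(method.delivery_tag)   # remove this message for good

    # Messages we never acked are returned to the queue when the connection closes.
    conn.close()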

How to daemonize django celery periodic task on ubuntu server?

Submitted by 拜拜、爱过 on 2019-12-11 05:05:50

Question: On localhost, I used these statements to run tasks and workers.

Run tasks:

    python manage.py celery beat

Run workers:

    python manage.py celery worker --loglevel=info

I used OTP, the RabbitMQ server and django-celery. It is working fine. I uploaded the project to an Ubuntu server and would like to daemonize these. For that I created a file /etc/default/celeryd with the config settings below.

    # Name of nodes to start, here we have a single node
    CELERYD_NODES="w1"
    # or we could have three nodes:
    #CELERYD
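For reference, the rest of /etc/default/celeryd for a django-celery setup typically follows the pattern from the old daemonizing docs; the sketch below uses placeholder paths and values and is not the asker's actual file, which is truncated above:

    # Where to chdir at start (the Django project directory).
    CELERYD_CHDIR="/opt/myproject/"

    # How to call "manage.py celeryd_multi" and "manage.py celeryctl".
    CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
    CELERYCTL="$CELERYD_CHDIR/manage.py celeryctl"

    # Extra arguments to the worker.
    CELERYD_OPTS="--time-limit=300 --concurrency=2"

    # %n will be replaced with the node name.
    CELERYD_LOG_FILE="/var/log/celery/%n.log"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"

    # Workers should run as an unprivileged user.
    CELERYD_USER="celery"
    CELERYD_GROUP="celery"

    # Name of the Django settings module.
    export DJANGO_SETTINGS_MODULE="settings"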

django + celery: disable prefetch for one worker, is there a bug?

Submitted by 旧城冷巷雨未停 on 2019-12-11 05:05:40

Question: I have a Django project with Celery. Due to RAM limitations I can only run two worker processes. I have a mix of 'slow' and 'fast' tasks. Fast tasks shall be executed ASAP. There can be many fast tasks in a short time frame (0.1s - 3s), so ideally both CPUs should handle them. Slow tasks might run for a few minutes, but the result can be delayed. Slow tasks occur less often, but it can happen that 2 or 3 are queued up at the same time. My idea was to have one:

    1 celery worker W1 with
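A common way to get this kind of split is to route slow and fast tasks to separate queues and give each queue its own single-process worker, with prefetching kept to one task on the slow worker. A sketch with assumed task and queue names, using Celery 4-style settings and flags:

    # celeryconfig / settings sketch -- route by task name (names are placeholders)
    task_routes = {
        'myapp.tasks.fast_task': {'queue': 'fast'},
        'myapp.tasks.slow_task': {'queue': 'slow'},
    }

    # The workers would then be started roughly like this:
    #   celery -A proj worker -Q fast -c 1 -n fast@%h
    #   celery -A proj worker -Q slow -c 1 -n slow@%h --prefetch-multiplier=1 -Ofair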