celery

Celery 4 not auto-discovering tasks

大憨熊 submitted on 2019-12-06 05:04:01
I have a Django 1.11 and Celery 4.1 project, and I've configured it according to the setup docs. My celery_init.py looks like: from __future__ import absolute_import import os from celery import Celery # set the default Django settings module for the 'celery' program. os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings.settings' app = Celery('myproject') app.config_from_object('django.conf:settings', namespace='CELERY') #app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) # does nothing app.autodiscover_tasks() # also does nothing print('Registering debug task...') @app.task(bind
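For reference, a minimal sketch of the app module from the upstream Django/Celery setup docs, with the module and settings paths taken from the question (your project layout may differ):

```python
# Minimal celery app module following the setup docs; paths mirror the question.
from __future__ import absolute_import
import os
from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
# With no arguments, autodiscover_tasks() reads INSTALLED_APPS from the
# configured settings and imports <app>/tasks.py from each installed app,
# but only if this module itself is imported when Django starts.
app.autodiscover_tasks()
```

A frequent cause of "nothing gets discovered" is that the module is never imported at startup; the setup docs load it from the project `__init__.py` with `from .celery import app as celery_app`, and a custom filename such as celery_init.py needs an equivalent import.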

Celery beat not starting: EOFError('Ran out of input')

不羁的心 submitted on 2019-12-06 04:28:33
Everything worked perfectly fine until: celery beat v3.1.18 (Cipater) is starting. Configuration -> . broker -> amqp://user:**@staging-api.user-app.com:5672// . loader -> celery.loaders.app.AppLoader . scheduler -> celery.beat.PersistentScheduler . db -> /tmp/beat.db . logfile -> [stderr]@%INFO . maxinterval -> now (0s) [2015-09-25 17:29:24,453: INFO/MainProcess] beat: Starting... [2015-09-25 17:29:24,457: CRITICAL/MainProcess] beat raised exception <class 'EOFError'>: EOFError('Ran out of input',) Traceback (most recent call last): File "/home/user/staging/venv/lib/python3.4
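The traceback points at the PersistentScheduler failing to unpickle its schedule file (/tmp/beat.db above), which typically means the file was truncated or corrupted by an unclean shutdown. A common workaround, sketched below with the Celery 3.1-era setting name and an illustrative path, is to delete the stale file or point beat at a fresh one:

```python
# Celery 3.x-style setting: have beat write its persistent schedule somewhere
# other than the corrupted /tmp/beat.db. The path here is illustrative.
CELERYBEAT_SCHEDULE_FILENAME = '/var/lib/celery/beat-schedule.db'
```

The same thing can be done per invocation with beat's --schedule/-s option.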

Django Celery Scheduling a manage.py command

烂漫一生 submitted on 2019-12-06 04:05:36
I need to update the Solr index on a schedule with the command: (env)$ ./manage.py update_index I've looked through the Celery docs and found info on scheduling, but haven't been able to find a way to run a Django management command on a schedule and inside a virtualenv. Would this be better run on a normal cron? And if so, how would I run it inside the virtualenv? Anyone have experience with this? Thanks for the help! To run your command periodically from a cron job, just wrap the command in a bash script that loads the virtualenv. For example, here is what we do to run manage.py commands:
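If the schedule should live with Celery rather than cron, another option is a small task that wraps the management command plus a beat entry for it. A sketch, assuming Celery beat is already running for the project; the task and schedule names are illustrative:

```python
# tasks.py: wrap the management command in a task so beat can schedule it.
from celery import shared_task
from django.core.management import call_command

@shared_task
def rebuild_search_index():
    # Runs inside the worker's own interpreter (and therefore its virtualenv),
    # so no wrapper script is needed; equivalent to "./manage.py update_index".
    call_command('update_index')

# settings.py (pre-4.0 style naming): run it every night at 02:00.
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'nightly-update-index': {
        'task': 'myapp.tasks.rebuild_search_index',  # hypothetical dotted path
        'schedule': crontab(hour=2, minute=0),
    },
}
```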

How can I set up Celery to call a custom worker initialization?

让人想犯罪 __ submitted on 2019-12-06 03:41:37
Question: I am quite new to Celery and I have been trying to set up a project with 2 separate queues (one to calculate and the other to execute). So far, so good. My problem is that the workers in the execute queue need to instantiate a class with a unique object_id (one id per worker). I was wondering if I could write a custom worker initialization to initialize the object at start and keep it in memory until the worker is killed. I found a similar question on custom_task but the proposed solution does
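One approach (a sketch, not taken from the question's answer) is the worker_process_init signal, which fires in each prefork worker process right after it forks; ResourceClient here is a hypothetical stand-in for the class the question wants one instance of per worker:

```python
import uuid
from celery import shared_task
from celery.signals import worker_process_init

class ResourceClient(object):
    """Hypothetical stand-in for the per-worker object described in the question."""
    def __init__(self, object_id):
        self.object_id = object_id

    def run(self, payload):
        return (self.object_id, payload)

worker_state = {}

@worker_process_init.connect
def init_worker(**kwargs):
    # Fires once in every worker process after it is forked, so each worker
    # builds its own object and keeps it in memory for its lifetime.
    worker_state['client'] = ResourceClient(object_id=uuid.uuid4().hex)

@shared_task
def execute(payload):
    # Tasks routed to the "execute" queue simply reuse the per-worker object.
    return worker_state['client'].run(payload)
```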

Run Unittest On Main Django Database

狂风中的少年 submitted on 2019-12-06 03:30:23
I'm looking for a way to run a full Celery setup during Django tests, asked in this other SO question. After thinking about it, I think I could settle for running a unittest (it's more of an integration test) in which I run the test script against the main Django (development) database. Is there a way to write unittests, run them with Nose and do so against the main database? I imagine it would be a matter of telling Nose (or whatever other framework) about the Django settings. I've looked at django-nose but wasn't able to find a way to tell it to use the main DB and not a test one. I don't
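One judgment-call option (not from the question) is a test runner that skips test-database setup entirely, so the tests hit whatever database the active settings point at; the sketch below uses Django's stock DiscoverRunner, and the dotted path is hypothetical:

```python
# test_runner.py: skip creating/destroying a throwaway test database.
from django.test.runner import DiscoverRunner

class KeepMainDatabaseRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        return None  # do not create a test database; use the configured one

    def teardown_databases(self, old_config, **kwargs):
        pass  # nothing was created, so nothing to destroy

# settings.py
TEST_RUNNER = 'myproject.test_runner.KeepMainDatabaseRunner'
```

Be aware that the tests will then read and write the development data.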

Celery Starting a Task when Other Tasks have Completed

血红的双手。 submitted on 2019-12-06 03:26:49
I have 3 tasks in Celery: celery_app.send_task('tasks.read_cake_recipes') celery_app.send_task('tasks.buy_ingredients') celery_app.send_task('tasks.make_cake') Neither read_cake_recipes nor buy_ingredients has any dependencies; however, before the task make_cake can be run, both read_cake_recipes and buy_ingredients need to have finished. make_cake can be run at ANYTIME after the first two have started. But make_cake has no idea if the other tasks have completed. So if read_cake_recipes or buy_ingredients takes too long, then make_cake fails miserably. Chaining tasks does not seem to work
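A chord is the usual primitive for this: the header group runs in parallel and the body is dispatched only once every header task has finished. A sketch using the question's own celery_app object and task names; note the body receives the list of header results as its first argument, so tasks.make_cake must accept (or ignore) it:

```python
from celery import chord

# Header tasks run in parallel; make_cake is sent only after both finish.
header = [celery_app.signature('tasks.read_cake_recipes'),
          celery_app.signature('tasks.buy_ingredients')]
chord(header)(celery_app.signature('tasks.make_cake'))
```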

Celery + SQLAlchemy: DatabaseError: (DatabaseError) SSL error: decryption failed or bad record mac

只谈情不闲聊 submitted on 2019-12-06 03:05:40
The error in the title triggers sometimes when using Celery with more than one worker on a PostgreSQL db with SSL turned on. I'm in a Flask + SQLAlchemy configuration. As mentioned here: https://github.com/celery/celery/issues/634 the solution in the django-celery plugin was to simply dispose of all db connections at the start of the task. In a Flask + SQLAlchemy configuration, doing this worked for me: from celery.signals import task_prerun @task_prerun.connect def on_task_init(*args, **kwargs): engine.dispose() In case you don't know what "engine" is and how to get it, see here: http://flask.pocoo
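An alternative sketch (an assumption, not taken from the issue thread): dispose of the connections inherited from the parent once per worker process at fork time, instead of before every task; the engine import path is hypothetical:

```python
from celery.signals import worker_process_init

from myapp.database import engine  # hypothetical import of the app's SQLAlchemy engine

@worker_process_init.connect
def reset_db_connections(**kwargs):
    # Drop pooled connections copied from the parent process so each worker
    # opens its own SSL session instead of sharing one.
    engine.dispose()
```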

Daemonize Celerybeat in Elastic Beanstalk (AWS)

会有一股神秘感。 submitted on 2019-12-06 01:38:51
Question: I am trying to run celerybeat as a daemon in Elastic Beanstalk. Here is my config file: files: "/opt/python/log/django.log": mode: "000666" owner: ec2-user group: ec2-user content: | # Log file encoding: plain "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh": mode: "000755" owner: root group: root content: | #!/usr/bin/env bash # Get django environment variables celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%

How to get the queue in which a task was run - celery

谁说胖子不能爱 submitted on 2019-12-06 01:23:35
I'm new to Celery and have a question. I have this simple task: @app.task(name='test_install_queue') def test_install_queue(): return subprocess.call("exit 0", shell=True) and I am calling this task later in a test case like result = tasks.test_default_queue.apply_async(queue="install") The task runs successfully in the install queue (I can see it in the celery log) and it completes fine. But I would like to know a programmatic way of finding out in which queue the task test_install_queue was run, from the object stored in result. Thank you! EDIT: I've changed the tasks to be like:
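One way to read it off the task itself rather than the result object: with bind=True the task can inspect its own request, whose delivery_info holds the exchange and routing key the message arrived with (with the default direct routing this matches the queue name). A sketch, with shared_task used only to keep the snippet self-contained:

```python
import subprocess
from celery import shared_task

@shared_task(bind=True, name='test_install_queue')
def test_install_queue(self):
    # routing_key is the key the message was published with; under default
    # direct routing it equals the queue name ("install" in the test above).
    routing_key = self.request.delivery_info.get('routing_key')
    return {'queue': routing_key,
            'rc': subprocess.call("exit 0", shell=True)}
```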

How can I minimise connections with django-celery when using CloudAMQP through dotcloud?

假装没事ソ submitted on 2019-12-05 23:59:12
After spending a few weeks getting django-celery-rabbitmq working on dotcloud, I have discovered that dotcloud is no longer supporting RabbitMQ. Instead they recommend CloudAMQP. So I've set up CloudAMQP as per the tutorials: http://docs.dotcloud.com/tutorials/python/django-celery/ http://docs.dotcloud.com/tutorials/more/cloudamqp/ http://www.cloudamqp.com/docs-dotcloud.html And the service works fine. However, even when I do not have any processes running, CloudAMQP says there are 3 connections. I had a look at their docs and they say (http://www.cloudamqp.com/docs-python.html) for celery it
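The settings commonly suggested for connection-capped CloudAMQP plans are a small broker pool plus heartbeats; a sketch with illustrative values, using the pre-4.0 setting names that a django-celery project expects:

```python
BROKER_POOL_LIMIT = 1           # cap the broker connection pool per process
BROKER_CONNECTION_TIMEOUT = 30  # fail fast instead of holding sockets open
BROKER_HEARTBEAT = 30           # let the broker reap dead connections
CELERY_RESULT_BACKEND = None    # skip result-backend connections if results are unused
```

Setting BROKER_POOL_LIMIT to 0 disables pooling entirely, trading connection reuse for the lowest possible idle connection count.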