django-celery

Where is the data provided by django-celery urls stored? How long is the data available? And what is the memory consumption?

跟風遠走 submitted on 2019-12-06 16:04:46
Question: I am starting a project using django-celery and I am making AJAX calls to the task URLs provided by 'djcelery.urls'. I would like to know a few things about this data: Where is that information being stored? Is it read from the djcelery tables in my Django project's database, or is it kept on the RabbitMQ server? My understanding of the djcelery tables is that they are only for monitoring usage via the camera. If it is being stored on the RabbitMQ server, how long will …

Celery beat - different time zone per task

不羁岁月 submitted on 2019-12-06 13:53:19
I am using celery beat to schedule some tasks. I'm able to use the CELERY_TIMEZONE setting to schedule tasks with the crontab schedule, and they run at the scheduled time in that time zone. But I want to be able to set up multiple such tasks for different time zones in the same application (a single Django settings.py). I know which time zone each task needs to run in at the time it is scheduled. Is it possible to specify a different time zone for each task? I'm using Django (1.4) with celery (3.0.11) and django-celery (3.0.11). I've looked at the djcelery.schedulers …
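Celery 3.0's crontab has no per-task timezone argument (the schedule classes do accept a nowfun callable, which is another avenue for this). One workaround is to convert each task's local wall-clock time into the worker's CELERY_TIMEZONE before building the crontab. A minimal sketch of that conversion, with a function name of my own choosing and the DST caveat noted in the comment:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def to_utc_hour_minute(hour, minute, tz_name, on_date):
    """Convert a wall-clock time in tz_name to the equivalent UTC time
    on a given date, so it can be fed to a crontab() schedule running
    with CELERY_TIMEZONE = 'UTC'. The offset can change across DST
    boundaries, so the result is only exact for the date supplied."""
    local = datetime(on_date.year, on_date.month, on_date.day,
                     hour, minute, tzinfo=ZoneInfo(tz_name))
    utc = local.astimezone(ZoneInfo("UTC"))
    return utc.hour, utc.minute

# 09:00 in New York on 2012-01-15 (EST, UTC-5) is 14:00 UTC
print(to_utc_hour_minute(9, 0, "America/New_York", datetime(2012, 1, 15)))
```

The returned hour/minute pair would then go into `crontab(hour=..., minute=...)`; you would have to recompute it when DST shifts the offset.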

Django Celery and multiple databases (Celery, Django and RabbitMQ)

冷暖自知 submitted on 2019-12-06 08:13:34
Is it possible to set a different database to be used with django-celery? I have a project with multiple databases configured and don't want django-celery to use the default one. It would be nice if I could still use the django-celery admin pages and read results stored in this other database :) It should be possible to set up a separate database for the django-celery models using Django database routers: https://docs.djangoproject.com/en/1.4/topics/db/multi-db/#automatic-database-routing I haven't tested this specifically with django-celery, but if it doesn't work for some reason, then it's …
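The router approach the answer links to could look roughly like this for django-celery; the 'celery' database alias and the app labels are assumptions to adjust for your project:

```python
class CeleryRouter:
    """Route django-celery's models to a dedicated 'celery' database.
    The app labels and the 'celery' alias are assumptions; adjust them
    to match your DATABASES setting and installed apps."""

    route_app_labels = {"djcelery", "djkombu"}

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "celery"
        return None  # fall through to the default database

    def db_for_write(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "celery"
        return None

    def allow_syncdb(self, db, model):
        # Django 1.4-era hook (called allow_migrate in modern Django)
        if model._meta.app_label in self.route_app_labels:
            return db == "celery"
        return None
```

You would then register it in settings, e.g. `DATABASE_ROUTERS = ["myproject.routers.CeleryRouter"]` (a hypothetical module path), and add a matching "celery" entry to DATABASES.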

Django Celery Scheduling a manage.py command

烂漫一生 submitted on 2019-12-06 04:05:36
I need to update the Solr index on a schedule with the command: (env)$ ./manage.py update_index I've looked through the Celery docs and found info on scheduling, but haven't been able to find a way to run a Django management command on a schedule and inside a virtualenv. Would this be better run as a normal cron job? And if so, how would I run it inside the virtualenv? Does anyone have experience with this? Thanks for the help! To run your command periodically from a cron job, just wrap the command in a bash script that loads the virtualenv. For example, here is what we do to run manage.py commands:
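The wrapper script elided above might look like the following sketch; every path here is a placeholder, not the original poster's layout:

```bash
#!/bin/bash
# run_update_index.sh -- wrap a manage.py command so cron runs it
# inside the virtualenv. All paths are illustrative.
source /home/deploy/env/bin/activate
cd /home/deploy/myproject
./manage.py update_index >> /var/log/update_index.log 2>&1
```

A matching (hypothetical) crontab entry would be `0 * * * * /home/deploy/bin/run_update_index.sh`. Alternatively, skip the activate step entirely and invoke the virtualenv's interpreter directly: `/home/deploy/env/bin/python manage.py update_index`.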

How can I minimise connections with django-celery when using CloudAMQP through dotcloud?

假装没事ソ submitted on 2019-12-05 23:59:12
After spending a few weeks getting django-celery-rabbitmq working on dotcloud, I have discovered that dotcloud no longer supports rabbitmq. Instead they recommend CloudAMQP. So I've set up CloudAMQP as per the tutorials: http://docs.dotcloud.com/tutorials/python/django-celery/ http://docs.dotcloud.com/tutorials/more/cloudamqp/ http://www.cloudamqp.com/docs-dotcloud.html And the service works fine. However, even when I do not have any processes running, CloudAMQP says there are 3 connections. I had a look at their docs ( http://www.cloudamqp.com/docs-python.html ), which say that for celery it …
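CloudAMQP's usual advice for connection-capped plans is to cap Celery's broker connection pool. A settings sketch: BROKER_POOL_LIMIT is a real Celery 3.x setting, but the values and the result-backend choice are assumptions:

```python
# settings.py -- sketch for keeping the AMQP connection count low.
BROKER_POOL_LIMIT = 1          # at most one pooled broker connection per process
CELERY_RESULT_BACKEND = "database"  # avoid extra AMQP connections for
                                    # results (assumption: DB backend is
                                    # acceptable for this deployment)
CELERYD_CONCURRENCY = 1        # fewer worker processes -> fewer connections
```

Each running process (web worker, celeryd, celerybeat) holds its own connections, which is one common explanation for seeing several even with light load.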

Stopping celery task gracefully

本小妞迷上赌 submitted on 2019-12-05 23:44:20
I'd like to quit a celery task gracefully (i.e. not by calling revoke(celery_task_id, terminate=True)). I thought I'd send a message to the task that sets a flag, so that the task function can return. What's the best way to communicate with a task? Cairnarvon: Use signals for this. Celery's revoke is the right choice; it uses SIGTERM by default, but you can specify another using the signal argument if you prefer. Just set a signal handler for it in your task (using the signal module) that terminates the task gracefully. Antonio Cabanas: You can also use an AbortableTask. I think this is the …
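Both answers boil down to cooperative cancellation: the task polls a flag at safe points and returns cleanly. With AbortableTask (celery.contrib.abortable) the flag is `self.is_aborted()`, set from outside via `result.abort()`, and it needs a result backend both sides share. Stripped of Celery so it runs anywhere, the pattern is just:

```python
import threading

def long_task(stop_flag, work_items):
    """Process items, checking a flag between steps -- the same shape
    as AbortableTask's self.is_aborted() polling, sketched here with a
    plain threading.Event so it runs without Celery."""
    done = []
    for item in work_items:
        if stop_flag.is_set():   # graceful exit point
            break
        done.append(item * 2)
    return done

stop = threading.Event()
stop.set()  # request a stop before any work happens
print(long_task(stop, [1, 2, 3]))  # -> []
```

The key design point is that cancellation only takes effect at the checkpoints you choose, so partial work can be committed or rolled back deliberately instead of being killed mid-step.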

Celery Storing unrecoverable task failures for later resubmission

妖精的绣舞 submitted on 2019-12-05 16:45:00
I'm using the djkombu transport for my local development, but I will probably be using amqp (rabbit) in production. I'd like to be able to iterate over failures of a particular type and resubmit them. This would be for cases where something fails on a server, or some edge-case bug is triggered by a new variation in data. So I could be resubmitting jobs up to 12 hours later, after some bug is fixed or a third-party site is back up. My question is: Is there a way to access old failed jobs via the result backend and simply resubmit them with the same params etc.? You can probably access old jobs using:
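Assuming the elided answer queried the result backend, the resubmission loop might be sketched like this. One real caveat: stock Celery result backends record a task's status and return value but not its arguments, so you would need to store the args yourself; the record shape below is my assumption:

```python
def resubmit_failures(task_records, resubmit):
    """Re-run every failed task via the supplied resubmit callable.
    task_records is assumed to be an iterable of dicts with the task
    name, status, and (self-recorded) args. With django-celery's
    database backend, a rough equivalent source would be
    TaskMeta rows filtered on status='FAILURE'."""
    resent = []
    for rec in task_records:
        if rec["status"] == "FAILURE":
            resubmit(rec["task"], *rec.get("args", ()))
            resent.append(rec["task"])
    return resent
```

In a real deployment `resubmit` would be something like `lambda name, *args: current_app.send_task(name, args)`; here it is left injectable so the loop can be tested without a broker.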

Consumer Connection error with django and celery+rabbitmq?

自作多情 submitted on 2019-12-05 15:59:38
Question: I'm trying to set up celeryd with Django and RabbitMQ. So far, I've done the following:
- Installed celery from pip
- Installed rabbitmq via the debs available from their repository
- Added a user and vhost to rabbitmq via rabbitmqctl, as well as permissions for that user
- Started the rabbitmq-server
- Installed django-celery via pip
- Set up django-celery, including its tables
- Configured the various settings in settings.py (BROKER_HOST, BROKER_PORT, BROKER_USER, BROKER_PASSWORD, BROKER_VHOST, as well as …
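The settings named in the question, written out as a sketch with placeholder values (these are the Celery 3.x / django-celery era setting names; host, credentials, and vhost must match what was created with rabbitmqctl):

```python
# settings.py -- broker settings as named in the question.
# With django-celery you would also call djcelery.setup_loader()
# here (omitted so this fragment runs without djcelery installed).
BROKER_HOST = "localhost"      # placeholder
BROKER_PORT = 5672             # RabbitMQ's default AMQP port
BROKER_USER = "myuser"         # placeholder: user added via rabbitmqctl
BROKER_PASSWORD = "mypassword" # placeholder
BROKER_VHOST = "myvhost"       # placeholder: vhost added via rabbitmqctl
```

A mismatch between these values and the rabbitmqctl-created user/vhost/permissions is a common cause of consumer connection errors with this stack.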

Django Celery Time Limit Exceeded?

拟墨画扇 submitted on 2019-12-05 12:53:14
Question: I keep receiving this error:
[2012-06-14 11:54:50,072: ERROR/MainProcess] Hard time limit (300s) exceeded for movies.tasks.encode_media[14cad954-26e2-4511-94ec-b17b9a4149bb]
[2012-06-14 11:54:50,111: ERROR/MainProcess] Task movies.tasks.encode_media[bc173429-77ae-4c96-b987-75337f915ec5] raised exception: TimeLimitExceeded(300,)
Traceback (most recent call last): File "/srv/virtualenvs/filmlib/local/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 370, in _on_hard …
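The 300 s in the log is the worker's configured hard time limit. A sketch of raising it, using real Celery 3.x setting names but illustrative values (long media encodes plausibly need more than five minutes):

```python
# settings.py -- raise the time limits seen in the error.
CELERYD_TASK_TIME_LIMIT = 1800       # hard limit: the worker process is killed
CELERYD_TASK_SOFT_TIME_LIMIT = 1700  # soft limit: SoftTimeLimitExceeded is
                                     # raised inside the task, giving it a
                                     # chance to clean up before the hard kill
```

Limits can also be set per task (e.g. `@task(time_limit=1800, soft_time_limit=1700)`), which is usually better than raising the global limit for every task just to accommodate one slow one.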

What are the django-celery (djcelery) tables for?

萝らか妹 submitted on 2019-12-05 12:24:48
Question: When I run syncdb, I notice a lot of tables created, like: djcelery_crontabschedule ... djcelery_taskstate. django-kombu is providing the transport, so these can't be related to the actual queue. Even when I run tasks, I still see nothing populated in these tables. What are these tables used for? Monitoring purposes only, if I enable it? If so, is it also true that if I do a lookup of AsyncResult(), I'm guessing that is actually looking up the task result via the django-kombu tables instead of …
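For what it's worth, where AsyncResult() looks is decided by the result backend setting, not by the transport; the django-kombu tables only carry messages. A minimal settings sketch, assuming django-celery's database result backend is what's wanted:

```python
# settings.py -- the result backend, not the transport, decides where
# AsyncResult() reads from. "database" is django-celery's DB-backed
# result store (a separate table from the djcelery_taskstate table,
# which the camera/monitor populates only when monitoring is enabled).
CELERY_RESULT_BACKEND = "database"
```

The djcelery_crontabschedule-style tables, by contrast, belong to the database-backed beat scheduler and are only used if you point celerybeat at it.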