django-celery

What are the django-celery (djcelery) tables for?

Submitted by 隐身守侯 on 2019-12-04 01:35:54
When I run syncdb, I notice a lot of tables created, like djcelery_crontabschedule ... djcelery_taskstate. django-kombu is providing the transport, so these tables can't be related to the actual queue. Even when I run tasks, I still see nothing populated in them. What are these tables used for? Monitoring purposes only, if I enable it? If so, is it also true that when I look up an AsyncResult(), the task result is actually fetched via the django-kombu tables rather than the djcelery ones? Thanks.

Answer (Mauro Rocco): The celery task_state table, populated by the daemon celerycam, is just
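For reference, a sketch of how those tables typically get populated in a django-celery 3.x setup (treat the setting values as assumptions for your versions):

    # settings.py (django-celery 3.x style)
    INSTALLED_APPS += ('djcelery',)
    import djcelery
    djcelery.setup_loader()

    # Results land in the djcelery tables only with the database result backend;
    # with django-kombu providing a database transport, queue messages live in
    # django-kombu's tables instead.
    CELERY_RESULT_BACKEND = 'database'

    # djcelery_taskstate / djcelery_workerstate are monitoring tables and are
    # filled only while the event camera is running:
    #   python manage.py celerycam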

Django Celery Time Limit Exceeded?

Submitted by 大兔子大兔子 on 2019-12-03 23:39:54
I keep receiving this error:

    [2012-06-14 11:54:50,072: ERROR/MainProcess] Hard time limit (300s) exceeded for movies.tasks.encode_media[14cad954-26e2-4511-94ec-b17b9a4149bb]
    [2012-06-14 11:54:50,111: ERROR/MainProcess] Task movies.tasks.encode_media[bc173429-77ae-4c96-b987-75337f915ec5] raised exception: TimeLimitExceeded(300,)
    Traceback (most recent call last):
      File "/srv/virtualenvs/filmlib/local/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 370, in _on_hard_timeout
        raise TimeLimitExceeded(hard_timeout)
    TimeLimitExceeded: 300

even though I have CELERYD_TASK_TIME_LIMIT set
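If the task legitimately needs more than 300 seconds, the limits can be raised globally or per task. A sketch using Celery 3.x names (the encode_media signature here is a guess):

    # settings.py: global limits (values are examples)
    CELERYD_TASK_TIME_LIMIT = 1800        # hard limit, seconds: the worker process is killed
    CELERYD_TASK_SOFT_TIME_LIMIT = 1740   # soft limit: raises SoftTimeLimitExceeded first

    # or per task, overriding the global settings:
    from celery.task import task
    from celery.exceptions import SoftTimeLimitExceeded

    @task(time_limit=1800, soft_time_limit=1740)
    def encode_media(video_id):
        try:
            pass  # the long-running encode
        except SoftTimeLimitExceeded:
            pass  # a chance to clean up before the hard limit kills the process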

How to run a celery worker on AWS Elastic Beanstalk?

Submitted by 眉间皱痕 on 2019-12-03 19:42:11
Question: Versions: Django 1.9.8, celery 3.1.23, django-celery 3.1.17, Python 2.7. I'm trying to run my celery worker on AWS Elastic Beanstalk, using Amazon SQS as the celery broker. Here is my settings.py:

    INSTALLED_APPS += ('djcelery',)
    import djcelery
    djcelery.setup_loader()
    BROKER_URL = "sqs://%s:%s@" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))

When I type the line below in the terminal, it starts the worker on my local machine. I've also created a few tasks, and they're executed correctly. How
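For the SQS side, a sketch of the transport options that usually matter with this stack (the region and queue name are placeholders, and the values are only examples):

    # settings.py: kombu's SQS transport options
    BROKER_TRANSPORT_OPTIONS = {
        'region': 'us-east-1',        # region of your SQS queues
        'polling_interval': 1,        # seconds between SQS polls
        'visibility_timeout': 3600,   # should exceed your longest task
    }
    CELERY_DEFAULT_QUEUE = 'my-eb-queue'  # SQS queue name (placeholder)

    # The worker itself is started the same way as locally, e.g.:
    #   python manage.py celery worker --loglevel=INFO
    # On Elastic Beanstalk that command is typically kept alive via an
    # .ebextensions container_command or a supervisord program entry.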

MongoEngine and dealing with “UserWarning: MongoClient opened before fork. Create MongoClient with connect=False, or create client after forking”

Submitted by 和自甴很熟 on 2019-12-03 16:08:27
I am using Celery and MongoEngine as part of my Django app. I get this warning whenever a celery @shared_task accesses the MongoDB database through mongoengine model classes:

    UserWarning: MongoClient opened before fork. Create MongoClient with connect=False, or create client after forking. See PyMongo's documentation for details: http://api.mongodb.org/python/current/faq.html#using-pymongo-with-multiprocessing

It clearly has something to do with multiprocessing and PyMongo, which mongoengine is built on. My question is: what is the best strategy to avoid this issue with mongoengine?
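One common strategy, sketched here assuming your mongoengine version forwards extra keyword arguments to PyMongo's MongoClient (recent versions do; the database name and host are examples):

    # wherever the app connects to MongoDB
    from mongoengine import connect

    # connect=False is passed through to MongoClient and defers the actual
    # connection until first use, i.e. until after Celery's prefork pool has
    # forked its worker processes.
    connect('mydb', host='mongodb://localhost:27017/mydb', connect=False)

Another common approach is to re-establish the connection inside Celery's worker_process_init signal handler, so that no client ever crosses a fork.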

Python Celery versus Threading Library for running async requests [closed]

Submitted by ℡╲_俬逩灬. on 2019-12-03 15:43:51
Question (closed as opinion-based, not accepting answers): I am running a Python method that parses a lot of data. Since it is time-intensive, I would like to run it asynchronously on a separate thread so the user can still access the website/UI. Do threads created with "from threading import Thread" terminate if a user exits the
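On the threading side of the comparison, a thread's lifetime is tied to its process, not to any user's browser session; only daemon threads are killed abruptly when the process exits. A minimal standard-library illustration (parse_data is a stand-in for the real work):

    import threading
    import time

    def parse_data():
        time.sleep(5)  # stand-in for the time-intensive parsing
        print "parsing finished"

    t = threading.Thread(target=parse_data)
    t.daemon = False   # non-daemon (the default): the interpreter will not
                       # exit until this thread finishes; daemon threads are
                       # killed when the process exits
    t.start()          # a user leaving the site has no effect on the thread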

Decorator after @task decorator in celery

Submitted by こ雲淡風輕ζ on 2019-12-03 13:16:57
I'm trying to apply a decorator after the celery @task decorator, something like:

    @send_email
    @task
    def any_function():
        print "inside the function"

I can get it to work the way the docs recommend, i.e. putting my decorator before the task decorator, but in this case I would like to access the task instance in my decorator. @send_email would then have to be a class decorator; this is what I tried, without success:

    class send_email(object):
        '''wraps a Task celery class'''
        def __init__(self, obj):
            self.wrapped_obj = obj
            functools.update_wrapper(self, obj)
        def __call__(self, *args, *
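With the docs-recommended ordering (custom decorator innermost, @task outermost), the task instance is still reachable if the task is bound. A sketch against Celery 3.1 (bind=True requires 3.1+, and send_email's body here is a placeholder):

    import functools
    from celery import task

    def send_email(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            result = func(self, *args, **kwargs)
            # self is the task instance because the task is bound
            print "would send email for task %s" % self.request.id
            return result
        return wrapper

    @task(bind=True)
    @send_email
    def any_function(self):
        print "inside the function"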

Django Celery ConnectionError: Too many heartbeats missed

Submitted by 六眼飞鱼酱① on 2019-12-03 13:03:56
Question: How can I solve the ConnectionError: Too many heartbeats missed from Celery?

Example error:

    [2013-02-11 15:15:38,513: ERROR/MainProcess] Error in timer: ConnectionError('Too many heartbeats missed', None, None, None, '')
    Traceback (most recent call last):
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py", line 97, in apply_entry
        entry()
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py", line 51, in __call__
        return self.fun(*self
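The error means the broker connection missed several heartbeat intervals in a row. One common mitigation, sketched with Celery 3.x setting names (whether it applies depends on the broker and network):

    # settings.py
    BROKER_HEARTBEAT = 30             # heartbeat interval in seconds
    BROKER_HEARTBEAT_CHECKRATE = 2.0  # how often to verify heartbeats arrived

    # Or disable AMQP heartbeats entirely if the connection cannot sustain them:
    # BROKER_HEARTBEAT = 0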

celery task clean-up with DB backend

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-03 12:44:14
I'm trying to understand how and when tasks are cleaned up in celery. Looking at the task docs, I see that:

    Old results will be cleaned automatically, based on the CELERY_TASK_RESULT_EXPIRES setting. By default this is set to expire after 1 day: if you have a very busy cluster you should lower this value.

But this quote is from the RabbitMQ Result Backend section, and I don't see any similar text in the Database Backend section. So my question is: is there a backend-agnostic approach I can take for old-task clean-up in celery, and if not, is there a DB Backend-specific approach I should
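With the database backend, expired rows are removed by the built-in celery.backend_cleanup periodic task, which only fires when a beat scheduler is running. A sketch (Celery 3.x setting names, assuming django-celery):

    # settings.py
    CELERY_TASK_RESULT_EXPIRES = 3600   # keep results for an hour (example value)

    # The built-in 'celery.backend_cleanup' periodic task deletes expired
    # results, but only if a beat scheduler is running, e.g.:
    #   python manage.py celery beat
    # or a worker started with the -B flag.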

Using celeryd as a daemon with multiple django apps?

Submitted by 泄露秘密 on 2019-12-03 11:06:54
Question: I'm just starting to use django-celery, and I'd like to set up celeryd to run as a daemon. The instructions, however, appear to suggest that it can be configured for only one site/project at a time. Can celeryd handle more than one project, or only one? And if only one, is there a clean way to set up celeryd to start automatically for each configuration, without requiring me to create a separate init script for each one?

Answer 1: Like all interesting questions, the
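A sketch of the usual workaround, assuming the generic celeryd init script from the Celery daemonizing docs: one configuration file per project, each pointing a separately named worker at its own Django project (the variable names are from those docs; the paths and project names here are hypothetical):

    # /etc/default/celeryd-projectA  (one such file per project)
    CELERYD_NODES="projectA-worker"
    CELERYD_CHDIR="/srv/projectA"                         # directory with manage.py
    CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
    CELERYD_OPTS="--loglevel=INFO"
    export DJANGO_SETTINGS_MODULE="settings"

The init script itself stays generic; each copy (or each invocation) just sources a different defaults file.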

How do I add authentication and endpoint to Django Celery Flower Monitoring?

Submitted by 白昼怎懂夜的黑 on 2019-12-03 10:38:23
I've been using flower locally, and it seems easy enough to set up and run, but I can't see how I would set it up in a production environment. In particular, how can I add authentication, and how would I define a URL to access it?

Answer: For a custom address, use the --address flag. For auth, use the --basic_auth flag. See below:

    # celery flower --help
    Usage: /usr/local/bin/celery [OPTIONS]

    Options:
      --address      run on the given address
      --auth         regexp of emails to grant access
      --basic_auth   colon separated user-password to enable basic auth
      --broker_api   inspect broker e.g. http://guest:guest@localhost:15672
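Putting the two flags together (the user, password, and port are placeholders):

    # bind flower to all interfaces on port 5555 with HTTP basic auth
    celery flower --address=0.0.0.0 --port=5555 --basic_auth=user1:password1

    # Behind a reverse proxy, a URL is usually defined by proxying a path
    # (e.g. /flower/) to this port and starting flower with --url_prefix=flower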