django-celery

How to run a Celery worker with a Django app scaled by AWS Elastic Beanstalk?

Submitted by 亡梦爱人 on 2019-11-28 05:06:00
How can I use Django with AWS Elastic Beanstalk and also run Celery tasks on the main node only? This is how I set up Celery with Django on Elastic Beanstalk, with scaling working fine. Please keep in mind that the 'leader_only' option for container_commands takes effect only on environment rebuild or deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk. To deal with that, you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection
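A minimal sketch of such an .ebextensions config, assuming the worker is managed by a supervisord program named celeryd (the file name and the restart command are hypothetical); leader_only is the Elastic Beanstalk option that restricts a container command to the leader instance:

    # .ebextensions/99_celery.config
    container_commands:
      01_start_celery_worker:
        # run only on the leader instance of the Auto Scaling group
        command: "supervisorctl restart celeryd"
        leader_only: true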

Retry Lost or Failed Tasks (Celery, Django and RabbitMQ)

Submitted by 自古美人都是妖i on 2019-11-28 05:04:35
Is there a way to determine whether a task was lost, and retry it? I think the reason for the loss could be a dispatcher bug or a worker thread crash. I was planning to retry such tasks, but I'm not sure how to determine which tasks need to be retried, or how to make this process automatic. Can I use my own custom scheduler, which would create new tasks? Edit: I found in the documentation that RabbitMQ never loses tasks, but what happens when a worker thread crashes in the middle of task execution? What you need is to set CELERY_ACKS_LATE = True. Late ack means that the task message is acknowledged after the task has been executed, not just before (the default behaviour), so if the worker crashes mid-task the broker re-delivers the message to another worker.
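A minimal sketch of both ways to enable late acknowledgement; the task name and body are hypothetical:

    # settings.py -- acknowledge messages only after the task finishes,
    # so a crashed worker causes the broker to redeliver the message
    CELERY_ACKS_LATE = True

    # or per task:
    from celery.decorators import task

    @task(acks_late=True)
    def process_order(order_id):
        do_work(order_id)  # hypothetical unit of work

Note that with late acks a task may run twice after a crash, so it should be idempotent.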

Retry Celery tasks with exponential back off

Submitted by 旧时模样 on 2019-11-28 03:18:45
For a task like this:

    from celery.decorators import task

    @task()
    def add(x, y):
        if not x or not y:
            raise Exception("test error")
        return self.wait_until_server_responds(

if it throws an exception, and I want to retry it from the daemon side, how can I apply an exponential back-off algorithm, i.e. retry after 2^2, 2^3, 2^4, etc. seconds? Also, is the retry maintained on the server side, so that if the worker happens to get killed, the next worker that spawns will take over the retry? asksol: The task.request.retries attribute contains the number of tries so far, so you can use this to implement exponential back-off.
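A minimal sketch of that approach, using Celery's documented retry API; the remote call and the exception type are hypothetical:

    from celery.decorators import task

    @task(max_retries=5)
    def call_server(x, y):
        try:
            return server_call(x, y)  # hypothetical remote call
        except ServerNotReady as exc:  # hypothetical exception
            # delay doubles on every attempt: 1, 2, 4, 8, ... seconds
            raise call_server.retry(
                exc=exc, countdown=2 ** call_server.request.retries)

Because retry re-publishes the message to the broker, the retry survives the death of the current worker and is picked up by whichever worker consumes it next.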

Celery tasks and custom decorators

Submitted by 别说谁变了你拦得住时间么 on 2019-11-27 21:26:08
I'm working on a project using Django and Celery (django-celery). Our team decided to wrap all data-access code in (app-name)/manager.py (NOT to wrap it into Managers the Django way), and to let the code in (app-name)/task.py deal only with assembling and performing tasks with Celery (so we have no Django ORM dependency in that layer). In my manager.py I have something like this:

    def get_tag(tag_name):
        ctype = ContentType.objects.get_for_model(Photo)
        try:
            tag = Tag.objects.get(name=tag_name)
        except ObjectDoesNotExist:
            return Tag.objects.none()
        return tag

    def get_tagged_photos(tag):
        ctype =
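For context, a hedged sketch of the layering described above (the task name is hypothetical): task.py only assembles and runs Celery tasks, delegating every ORM touch to manager.py:

    # task.py
    from celery.decorators import task

    import manager  # the (app-name)/manager.py data-access layer

    @task()
    def collect_tagged_photos(tag_name):
        # no ORM calls here -- the manager layer owns data access
        tag = manager.get_tag(tag_name)
        return manager.get_tagged_photos(tag)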

Using class methods as Celery tasks

Submitted by 时光怂恿深爱的人放手 on 2019-11-27 18:43:44
I'm trying to use the methods of a class as django-celery tasks, marking them with the @task decorator. The same situation is described here, asked by Anand Jeyahar. It's something like this:

    class A:
        @task
        def foo(self, bar):
            ...

    def main():
        a = A()
        ...
        # what I need:
        a.foo.delay(bar)  # executes as celery task
        a.foo(bar)        # executes locally

The problem is that even when I use a class instance like this, a.foo.delay(bar) says that foo needs at least two arguments, which means that the self pointer is missing. More information: I can't convert the class to a module because of inheritance; methods are strongly…
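One era-appropriate option (an assumption on my part, not from the excerpt): Celery 3.x shipped celery.contrib.methods, whose task_method filter binds the task to the instance so that self is passed automatically; the module was removed in Celery 4.0. A minimal sketch:

    from celery import Celery
    from celery.contrib.methods import task_method  # Celery 3.x only

    app = Celery()

    class A(object):
        @app.task(filter=task_method)
        def foo(self, bar):
            # self is the A instance; bar comes from the call site
            return bar

With this, a = A(); a.foo.delay(42) enqueues the task with self bound, while a.foo(42) still runs locally.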

Django Celery Logging Best Practice

Submitted by 不打扰是莪最后的温柔 on 2019-11-27 17:47:42
I'm trying to get Celery logging working with Django. I have logging set up in settings.py to go to the console (that works fine, as I'm hosting on Heroku). At the top of each module I have:

    import logging
    logger = logging.getLogger(__name__)

And in my tasks.py I have:

    from celery.utils.log import get_task_logger
    logger = get_task_logger(__name__)

That works fine for logging calls from a task, and I get output like this:

    2012-11-13T18:05:38+00:00 app[worker.1]: [2012-11-13 18:05:38,527: INFO/PoolWorker-2] Syc feed is starting

But if that task then calls a method in another module, e.g. a…
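A common remedy (my assumption; the excerpt cuts off before the answer) is to stop Celery from hijacking the root logger, so the handlers configured in Django's LOGGING setting keep working for module-level loggers used inside tasks:

    # settings.py -- keep Django's logging configuration intact in the worker
    CELERYD_HIJACK_ROOT_LOGGER = False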

Django and Celery - re-loading code into Celery after a change

Submitted by 帅比萌擦擦* on 2019-11-27 14:47:54
Question: If I make a change to tasks.py while Celery is running, is there a mechanism by which it can re-load the updated code, or do I have to shut Celery down and re-load? I read that Celery had an --autoreload argument in older versions, but I can't find it in the current version:

    celery: error: unrecognized arguments: --autoreload

Answer 1: Unfortunately --autoreload doesn't work and it is deprecated. You can use Watchdog, which provides watchmedo, a shell utility to perform actions based on file events: pip install watchdog
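A hedged sketch of the restart-on-change invocation with watchmedo; the application name myproject is hypothetical:

    # restart the worker whenever a *.py file under the current directory changes
    watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- \
        celery worker --app=myproject --loglevel=info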

Celery stop execution of a chain

Submitted by 坚强是说给别人听的谎言 on 2019-11-27 11:49:58
Question: I have a check_orders task that is executed periodically. It makes a group of tasks so that I can time how long executing them took, and perform something when they're all done (this is the purpose of res.join [1] and grouped_subs). The grouped tasks are pairs of chained tasks. What I want is that when the first task doesn't meet a condition (fails), the second task in the chain is not executed. I can't figure this out for the life of me, and I feel this is pretty basic functionality…
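One common way to do this (my assumption; the excerpt cuts off before the answer): a bound task can drop the remainder of its own chain by clearing the request's chain attribute. The task names and the condition are hypothetical:

    from celery import Celery

    app = Celery()

    @app.task(bind=True)
    def check_order(self, order_id):
        if not order_is_valid(order_id):  # hypothetical condition
            # drop the rest of the chain without marking this task FAILURE
            self.request.chain = None  # on Celery 3.x: self.request.callbacks = None
            return None
        return order_id

    @app.task
    def process_order(order_id):
        ...

Raising an exception in check_order also prevents process_order from running, at the cost of the chain ending in a FAILURE state.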

How to write an Ubuntu Upstart job for Celery (django-celery) in a virtualenv

Submitted by 笑着哭i on 2019-11-27 11:43:17
Question: I really enjoy using Upstart. I currently have Upstart jobs to run different gunicorn instances in a number of virtualenvs. However, the 2-3 examples of Celery Upstart scripts I found on the interwebs don't work for me. So, with the following variables, how would I write an Upstart job to run django-celery in a virtualenv?

Path to Django project: /srv/projects/django_project
Path to this project's virtualenv: /srv/environments/django_project
Path to celery settings is the Django project…
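A hedged sketch of such a job using the paths given above; whether the worker is started with manage.py celery worker or the older manage.py celeryd depends on the django-celery version, so treat the exec line as an assumption:

    # /etc/init/django_project-celery.conf
    description "celery worker for django_project"

    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn

    # run from the project directory with the virtualenv's python
    chdir /srv/projects/django_project
    exec /srv/environments/django_project/bin/python manage.py celery worker --loglevel=info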

Django-Celery progress bar

Submitted by 余生颓废 on 2019-11-27 10:10:06
Question: I use Celery, Django-Celery and RabbitMQ. I can see all my tasks in the Django admin page, but at the moment a task has just one of a few states: RECEIVED, RETRY, REVOKED, SUCCESS, STARTED, FAILURE, PENDING. That's not enough information for me. Is it possible to add more details about a running process to the admin page, like a progress bar or a finished-jobs counter? I know how to use the Celery logging function, but a GUI is better in my case for some reasons. So, is it possible to send some tracing…
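One common building block (my assumption; the excerpt cuts off before the answer): a task can publish progress through a custom state with update_state, and a view can poll it via AsyncResult. The task and helper names are hypothetical:

    from celery import Celery

    app = Celery()

    @app.task(bind=True)
    def import_photos(self, photo_ids):
        total = len(photo_ids)
        for done, photo_id in enumerate(photo_ids, start=1):
            process(photo_id)  # hypothetical unit of work
            # publish a custom PROGRESS state with a progress payload
            self.update_state(state='PROGRESS',
                              meta={'current': done, 'total': total})
        return total

A polling view can then read AsyncResult(task_id).state and .info to render the bar or counter.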