celery

Distributed Celery scheduler

删除回忆录丶 submitted at 2019-12-21 08:32:07
Question: I'm looking for a distributed cron-like framework for Python and found Celery. However, the docs say "You have to ensure only a single scheduler is running for a schedule at a time, otherwise you would end up with duplicate tasks", and Celery uses celery.beat.PersistentScheduler, which stores the schedule in a local file. So my question is: is there an implementation other than the default that can put the schedule "into the cluster" and coordinate task execution so that each task is only run once?
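One hedged option (my own sketch, not quoted from an answer): move the schedule out of the local file and into a shared database with django-celery-beat, so the one beat process can run on any box against the same schedule. A minimal sketch, assuming the django-celery-beat package is installed and migrated, and that Django settings are loaded with the CELERY_ namespace:

# settings.py
INSTALLED_APPS += ['django_celery_beat']

# Replace celery.beat.PersistentScheduler (local shelve file) with the
# database-backed scheduler, so the schedule lives in the cluster's DB.
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'

# Start exactly one beat process anywhere in the cluster, e.g.:
#   celery -A proj beat --scheduler django_celery_beat.schedulers:DatabaseScheduler

Note that this still assumes a single beat process; the database only makes the schedule shared and editable at runtime, it does not by itself deduplicate multiple schedulers.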

How many CPU cores does a Heroku dyno have?

妖精的绣舞 submitted at 2019-12-21 06:56:19
Question: I'm using Django with Celery 3.0.17 and am trying to figure out how many Celery workers are run by default. From this link I understand that (not having modified this config) the number of worker processes is currently equal to the number of CPU cores, which is why I'm asking about the core count. I wasn't able to find an official answer by googling or searching Heroku's dev center. I think it's 4 cores, as I'm seeing 4 concurrent connections to my AMQP server, but I wanted to confirm that. Thanks, J Answer 1:
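As a side note, the default prefork pool size is simply the number of CPU cores Celery detects, so one hedged way to remove the guesswork on a dyno is to print the detected count and then pin the concurrency explicitly (proj is a placeholder app name):

# Print the core count Celery's default concurrency is derived from.
import multiprocessing
print(multiprocessing.cpu_count())

# Or stop relying on the default and pin the worker pool size, e.g.:
#   celery worker -A proj --concurrency=4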

How do I create Celery queues at runtime so that tasks sent to those queues get picked up by workers?

走远了吗. submitted at 2019-12-21 05:57:20
Question: I'm using Django 1.4, Celery 3.0, and RabbitMQ. To describe the problem: I have many content networks in a system, and I want a queue for processing the tasks related to each of these networks. However, content is created on the fly while the system is live, so I need to create queues on the fly and have existing workers start picking them up. I've tried scheduling tasks in the following way (where content is a Django model instance): queue_name = 'content.{}'.format(content.pk) # E.g. queue
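For what it's worth, a hedged sketch of the usual pattern: route each task to its per-content queue by name at call time, and tell the already-running workers to start consuming from that queue via the remote-control API (app, process_content and content are placeholders following the question's naming):

from myproject.celery import app             # placeholder: your Celery app instance
from myproject.tasks import process_content  # placeholder task

queue_name = 'content.{}'.format(content.pk)

# Ask all running workers to start consuming from the new queue.
app.control.add_consumer(queue_name, reply=True)

# Send the task directly to that queue.
process_content.apply_async(args=[content.pk], queue=queue_name)

This leans on CELERY_CREATE_MISSING_QUEUES (on by default) so the queue is declared the first time it is used.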

How to dynamically schedule tasks in Django?

[亡魂溺海] submitted at 2019-12-21 05:24:06
Question: I need to build an app in Django that lets the user run some task every day at a time they specify at runtime. I have looked at Celery but couldn't find anything that will help. I found apply_async, and I can get the task to execute once after the specified delay, but not recurrently. I am missing something but don't know what. Please suggest how I can achieve this. Answer 1: A simple solution that avoids the heavy AMQP stack and external dependencies like Celery. One thing you
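If staying with Celery is acceptable, one hedged workaround for the "once but not recurrent" limitation of apply_async is a task that re-schedules itself for the same time the next day. A minimal sketch (daily_job and its arguments are illustrative, not from the answer; assumes a modern @shared_task-style task):

from datetime import datetime, time, timedelta

from celery import shared_task


@shared_task
def daily_job(user_id, hour, minute):
    # ... do the user's daily work here ...

    # Queue the next run for the same wall-clock time tomorrow (UTC here).
    next_run = datetime.combine(
        datetime.utcnow().date() + timedelta(days=1), time(hour, minute)
    )
    daily_job.apply_async(args=[user_id, hour, minute], eta=next_run)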

Running multiple Django Celery websites on the same server

十年热恋 submitted at 2019-12-21 04:51:25
Question: I'm running multiple Django/Apache/WSGI websites on the same server using Apache2 virtual hosts, and I would like to use Celery. But if I start celeryd for multiple websites, all of the websites use the configuration (logs, DB, etc.) of the last celeryd instance I started. Is there a way to use multiple celeryd instances (one for each website), or one celeryd for all of them? It seems like it should be doable, but I can't figure out how. Answer 1: This problem was a big headache; I didn't notice @Crazyshezy
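One hedged way to keep the instances apart (a sketch, not the quoted answer): give every site its own broker namespace, e.g. a RabbitMQ virtual host per site, and start a separate celeryd per site with its own settings module, so nothing about logs or DB is shared. Names and paths below are placeholders:

# site1/settings.py -- each site points at its own RabbitMQ vhost
# (create the vhosts and users first with rabbitmqctl)
BROKER_URL = 'amqp://site1:site1pass@localhost:5672/site1'

# site2/settings.py
BROKER_URL = 'amqp://site2:site2pass@localhost:5672/site2'

# Start one worker per site, each with its own settings and log file, e.g.:
#   DJANGO_SETTINGS_MODULE=site1.settings python manage.py celeryd --logfile=/var/log/celery/site1.log
#   DJANGO_SETTINGS_MODULE=site2.settings python manage.py celeryd --logfile=/var/log/celery/site2.log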

Celery task clean-up with DB backend

懵懂的女人 submitted at 2019-12-21 03:57:07
Question: I'm trying to understand how and when tasks are cleaned up in Celery. From looking at the task docs I see that: "Old results will be cleaned automatically, based on the CELERY_TASK_RESULT_EXPIRES setting. By default this is set to expire after 1 day: if you have a very busy cluster you should lower this value." But this quote is from the RabbitMQ Result Backend section, and I do not see any similar text in the Database Backend section. So my question is: is there a backend-agnostic approach I
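For the database backend specifically, a hedged sketch: the expiry setting still applies, and the built-in celery.backend_cleanup periodic task (which celery beat schedules daily when expiry is enabled) is what actually deletes the old rows; it can also be triggered by hand:

# settings.py -- Celery 3.x style names, matching the question
CELERY_TASK_RESULT_EXPIRES = 3600  # seconds; lower this on a busy cluster

# The built-in cleanup task does the deleting for database-style backends.
# celery beat schedules it daily when expiry is enabled, or run it manually:
from celery import current_app
current_app.send_task('celery.backend_cleanup')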

How do I add authentication and an endpoint to Django Celery Flower monitoring?

瘦欲@ submitted at 2019-12-21 03:34:10
Question: I've been using Flower locally and it seems easy enough to set up and run, but I can't see how I would set it up in a production environment. In particular, how can I add authentication, and how would I define a URL to access it? Answer 1: For a custom address, use the --address flag. For auth, use the --basic_auth flag. See below: # celery flower --help Usage: /usr/local/bin/celery [OPTIONS] Options: --address run on the given address --auth regexp of emails to grant access --basic_auth colon
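Putting those flags together, an example invocation might look like the line below; the --port value, the 0.0.0.0 bind address, and the credentials are my own placeholders, not part of the answer:

# run Flower on a fixed address/port with HTTP basic auth
celery flower --address=0.0.0.0 --port=5555 --basic_auth=user1:password1,user2:password2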

Running Python's Celery on Elastic Beanstalk with Django

ⅰ亾dé卋堺 submitted at 2019-12-21 02:45:12
Question: I'm considering a move to Elastic Beanstalk (on account of the pricing). The blocker is that I have no idea how to set up Celery on a Python app (Django, in my case) deployed to the service. Has anyone managed to set up Celery on Elastic Beanstalk? If so, please let me know how you managed it and what tools you used. Answer 1: Use the SQS service. Read this: Celery with Amazon SQS and this: http://docs.celeryproject.org/en/latest/getting-started/brokers/sqs.html Source: https://stackoverflow.com
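For reference, a hedged sketch of what the SQS broker configuration from that link looks like in Django settings (Celery 3.x style names; the credentials and region are placeholders, and keys with special characters must be URL-quoted):

# settings.py
BROKER_URL = 'sqs://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@'

BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',           # region the SQS queues live in
    'queue_name_prefix': 'myapp-',   # optional: namespace this app's queues
    'visibility_timeout': 3600,      # seconds before an unacked task reappears
}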

Receiving events from a Celery task

牧云@^-^@ submitted at 2019-12-20 14:38:51
Question: I have a long-running Celery task which iterates over an array of items and performs some actions. The task should somehow report back which item it is currently processing, so the end user is aware of the task's progress. At the moment my Django app and Celery sit together on one server, so I am able to use Django's models to report the status, but I am planning to add more workers that live away from Django and can't reach the DB. Right now I see a few solutions: Store intermediate results
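One hedged option to add to that list: have the task publish its own progress through the result backend with update_state, which works even when the workers can't reach Django's DB. A minimal sketch assuming a bound task (process_items is a placeholder name):

from celery import shared_task


@shared_task(bind=True)
def process_items(self, items):
    total = len(items)
    for i, item in enumerate(items, start=1):
        # ... process one item ...

        # Publish progress via the result backend, not the Django DB.
        self.update_state(state='PROGRESS',
                          meta={'current': i, 'total': total})
    return {'current': total, 'total': total}

The Django side can then poll AsyncResult(task_id).state and .info to render a progress indicator.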