celerybeat

How to detect failure and auto-restart a Celery worker

Submitted by 只愿长相守 on 2021-02-05 08:35:59
Question: I use Celery and Celery Beat in my Django-powered website; the server OS is Ubuntu 16.04. With Celery Beat, a job is run by a Celery worker every 10 minutes. Sometimes the worker shuts down without any useful log messages or errors, so I want a way to detect the status (on/off) of the Celery worker (not Beat) and, if it has stopped, restart it automatically. How can I do that? Thanks.

Answer 1: In production, you should run Celery, Beat, your app server, etc. as daemons [1] using…
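Following the answer's daemon advice, the "detect and restart" part can be delegated to systemd rather than hand-rolled. A minimal sketch of a unit file, assuming a hypothetical project at /srv/myproject with app module "proj" (paths, user, and app name are assumptions, adapt to your setup):

```ini
# Hypothetical unit: /etc/systemd/system/celery-worker.service
[Unit]
Description=Celery worker
After=network.target redis.service

[Service]
Type=simple
User=celery
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/celery -A proj worker --loglevel INFO
# systemd itself does the failure detection and restarting:
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload` and `systemctl enable --now celery-worker`, systemd restarts the worker whenever the process dies, and `systemctl status celery-worker` reports its on/off state.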

How to register Celery task to specific worker?

Submitted by 一笑奈何 on 2021-01-04 07:19:22
Question: I am developing a web application in Python/Django, and I have several tasks running in Celery. Task A must run one at a time, so I created a worker with --concurrency=1 and routed task A to it using the following command: celery -A proj worker -Q A -c 1 -l INFO. Everything works fine: this worker handles task A, and other tasks are routed to the default queue. However, the worker above returns all tasks when I use the inspect command to get the registered tasks for the worker. That is…
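The behavior described is expected: "registered" reflects the task code a worker has imported, while `-Q` only controls which queues it consumes from. A pure-Python sketch (no Celery dependency; task names and routes are hypothetical) of that distinction:

```python
# Registration vs. consumption: every worker imports (and so "registers")
# all task code, but only pulls messages from the queues it subscribes to.

ALL_TASKS = {"proj.tasks.task_a", "proj.tasks.task_b", "proj.tasks.task_c"}
TASK_ROUTES = {
    "proj.tasks.task_a": "A",        # routed to the dedicated queue
    "proj.tasks.task_b": "default",
    "proj.tasks.task_c": "default",
}

def tasks_consumed_by(worker_queues):
    """Tasks a worker will actually execute: those routed to its queues."""
    return {t for t, q in TASK_ROUTES.items() if q in worker_queues}

registered = ALL_TASKS                 # what `inspect registered` reports
consumed = tasks_consumed_by({"A"})    # what the `-Q A` worker really runs
```

So `celery inspect registered` listing everything does not mean the `-Q A` worker will execute everything; only task_a messages ever reach it.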

How to dynamically add a scheduled task to Celery beat

Submitted by 佐手、 on 2020-12-08 06:07:14
Question: Using Celery 3.1.23, I am trying to dynamically add a scheduled task to Celery Beat. I have one Celery worker and one Celery Beat instance running. Triggering a standard Celery task by running task.delay() works fine. When I define a scheduled periodic task as a setting in the configuration, Celery Beat runs it. However, what I need is to be able to add, at runtime, a task that runs at a specified crontab. After adding a task to the persistent scheduler, Celery Beat doesn't seem to detect the newly added…
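The core of the problem is that beat keeps its own in-memory view of the schedule and, with the default persistent scheduler, only loads the store at startup. A minimal stand-in (all names hypothetical, not Celery's API) for what a beat scheduler tracks per entry:

```python
from datetime import datetime, timedelta

# Each schedule entry pairs an interval with a last-run time; is_due()
# decides whether beat should send the task now. Adding an entry to the
# *store* is not enough: the running beat process must re-read its schedule.

class Entry:
    def __init__(self, name, every):
        self.name = name
        self.every = every
        self.last_run = datetime.min   # never run yet

    def is_due(self, now):
        return now - self.last_run >= self.every

schedule = {}  # beat's in-memory view of the schedule store

def add_entry(name, every):
    schedule[name] = Entry(name, every)

add_entry("create-missing-ys", timedelta(minutes=10))
due = schedule["create-missing-ys"].is_due(datetime.now())
```

In practice the usual route for runtime schedule changes is a database-backed scheduler (e.g. the djcelery/django-celery-beat DatabaseScheduler family), which polls its store for changes instead of reading it once, though the details vary by Celery version.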

Django/Celery multiple queues on localhost - routing not working

Submitted by 大兔子大兔子 on 2020-08-21 10:28:19
Question: I followed the Celery docs to define two queues on my dev machine. My Celery settings:

    CELERY_ALWAYS_EAGER = True
    CELERY_TASK_RESULT_EXPIRES = 60  # 1 min
    CELERYD_CONCURRENCY = 2
    CELERYD_MAX_TASKS_PER_CHILD = 4
    CELERYD_PREFETCH_MULTIPLIER = 1
    CELERY_CREATE_MISSING_QUEUES = True
    CELERY_QUEUES = (
        Queue('default', Exchange('default'), routing_key='default'),
        Queue('feeds', Exchange('feeds'), routing_key='arena.social.tasks.#'),
    )
    CELERY_ROUTES = {
        'arena.social.tasks.Update': { 'queue': 'fs_feeds', }…
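Two things in these settings are worth flagging. First, CELERY_ALWAYS_EAGER = True makes tasks execute locally and synchronously, bypassing the broker and therefore all routing. Second, the route targets a queue named 'fs_feeds' while the declared queue is named 'feeds'. A pure-Python sanity check (no Celery needed) that surfaces the second issue:

```python
# Every queue referenced by a route should be one of the declared queue
# names; a mismatch means routed messages never land where you expect.

declared_queues = {"default", "feeds"}                          # CELERY_QUEUES
routes = {"arena.social.tasks.Update": {"queue": "fs_feeds"}}   # CELERY_ROUTES

def undeclared_route_queues(routes, declared):
    """Queue names referenced by routes but never declared."""
    return {r["queue"] for r in routes.values()} - declared

bad = undeclared_route_queues(routes, declared_queues)
```

Here `bad` contains 'fs_feeds', the queue name that appears in the routes but not among the declared queues; renaming the route's target to 'feeds' (and disabling eager mode while testing routing) is the likely fix.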

Celery beat + redis with password throws No Auth exception

Submitted by 笑着哭i on 2020-06-26 07:46:00
Question: I am using Celery and Redis as two services in my Docker setup. The configuration is as follows:

    redis:
      image: redis:latest
      hostname: redis
      ports:
        - "0.0.0.0:6379:6379"
      command: --requirepass PASSWORD
    celeryworker:
      <<: *django
      depends_on:
        - redis
        - postgres
      command: "celery -E -A rhombus.taskapp worker --beat --scheduler redbeat.schedulers:RedBeatScheduler --loglevel INFO --uid taskmaster --concurrency=5"

When I try to build my containers and schedule some jobs once the workers are ready, I get an…
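Once Redis runs with --requirepass, every URL that Celery or RedBeat uses to reach it must embed the password, otherwise Redis answers with a NOAUTH error. A sketch of the relevant settings, assuming the password and hostname from the compose file above (RedBeat's schedule also lives in Redis and is configured via its own URL setting; check the setting name against your RedBeat version):

```python
from urllib.parse import urlparse

# redis://:<password>@<host>:<port>/<db> — note the empty username before ':'
BROKER_URL = "redis://:PASSWORD@redis:6379/0"
CELERY_RESULT_BACKEND = BROKER_URL
REDBEAT_REDIS_URL = BROKER_URL   # RedBeat stores its schedule in Redis too

parsed = urlparse(BROKER_URL)    # quick check that the URL carries auth
```

Parsing the URL confirms the credentials are actually in it (password "PASSWORD", host "redis", port 6379); a URL without the `:PASSWORD@` part is the usual cause of the NOAUTH exception.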

Celery Beat: Limit to single task instance at a time

Submitted by 耗尽温柔 on 2020-02-26 18:23:05
Question: I have Celery Beat and Celery (four workers) doing some processing steps in bulk. One of those tasks is roughly along the lines of "for each X that hasn't had a Y created, create a Y." The task runs periodically at a semi-rapid rate (every 10 seconds) and completes very quickly. There are other tasks going on as well. I've run into the issue multiple times where the beat tasks apparently become backlogged, so the same task (from different beat ticks) is executed simultaneously, causing…
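The usual fix for this class of problem is a short-lived mutex taken at the top of the task body, so overlapping beat ticks become no-ops instead of duplicate runs. In production the lock lives in a shared store such as Redis or memcached (an atomic "set if absent" like Django's cache.add or Redis SET NX, with a timeout as a safety net); in this sketch a process-local dict stands in for the shared cache, and all names are illustrative:

```python
import threading

_cache = {}                      # stand-in for the shared cache/Redis
_cache_guard = threading.Lock()

def cache_add(key, value):
    """Atomic 'set if absent', like cache.add / Redis SET NX."""
    with _cache_guard:
        if key in _cache:
            return False
        _cache[key] = value
        return True

def cache_delete(key):
    with _cache_guard:
        _cache.pop(key, None)

runs = []

def create_missing_ys():
    # Only one concurrent invocation gets past this point.
    if not cache_add("lock:create_missing_ys", "1"):
        return "skipped"         # another beat-triggered run holds the lock
    try:
        runs.append("ran")       # the actual "create a Y for each X" work
        return "ran"
    finally:
        cache_delete("lock:create_missing_ys")

first = create_missing_ys()
```

With a real shared cache, the add call should also carry an expiry slightly longer than the task's worst-case runtime, so a crashed worker cannot leave the lock held forever.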
