celeryd

Running celery as daemon does not create PID file (no permission issue)

Submitted by 可紊 on 2021-02-11 16:34:44
Question: I am trying to run a celery worker as a daemon / service on an Ubuntu server. I've followed the documentation (https://docs.celeryproject.org/en/stable/userguide/daemonizing.html). However, when I start the daemon it says:

    celery multi v5.0.4 (singularity)
    > Starting nodes...
    > worker1@ubuntuserver: OK

But when I check the status it says:

    celery init v10.1.
    Using config script: /etc/default/celeryd
    celeryd down: no pidfiles found

I've seen some info on the internet about permissions, but I'm not sure …
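The fix is not included in this excerpt. A common cause with the generic init.d script is that the pid and log directories named in /etc/default/celeryd do not exist or are not writable by the worker user, so no pid file is ever written. A minimal config sketch (paths, app and user names are assumptions, not taken from the question):

    # /etc/default/celeryd -- illustrative values only
    CELERY_BIN="/usr/local/bin/celery"
    CELERY_APP="proj"                          # assumed application name
    CELERYD_NODES="worker1"
    CELERYD_USER="celery"                      # must be able to write the dirs below
    CELERYD_GROUP="celery"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
    CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
    CELERY_CREATE_DIRS=1                       # create pid/log dirs if missing

With CELERY_CREATE_DIRS=1 the init script creates /var/run/celery and /var/log/celery itself; otherwise they have to be created and chowned to CELERYD_USER beforehand.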

Celery Exception Handling

Submitted by 穿精又带淫゛_ on 2020-01-22 20:55:17
Question: Suppose I have this task definition:

    def some_other_foo(input):
        raise Exception('This is not handled!')
        return input

    @app.task(bind=True, max_retries=5, soft_time_limit=20)
    def some_foo(self, someInput={}):
        response = ""
        try:
            response = some_other_foo(someInput)
        except Exception as exc:
            self.retry(countdown=5, exc=exc)
            response = "error"
        return response

My problem is that the exception is not handled in some_foo: I get an error instead of response="error", the task crashes, and I get a traceback that …
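What the excerpt does not show is that self.retry() itself raises an exception (Retry) to hand control back to the worker, so the line after it in the except block never runs. A minimal sketch of one way to return a fallback value instead, assuming that is the intent (app name and broker are placeholders, not from the question):

    # Sketch: self.retry() raises Retry, so code after it is unreachable. When
    # retries are exhausted and no exc= is passed, retry() raises
    # MaxRetriesExceededError, which can be caught to return a fallback value.
    from celery import Celery
    from celery.exceptions import MaxRetriesExceededError

    app = Celery('proj', broker='amqp://')    # assumed app/broker

    def some_other_foo(input):                # stand-in for the function above
        raise Exception('This is not handled!')

    @app.task(bind=True, max_retries=5, soft_time_limit=20)
    def some_foo(self, someInput=None):
        try:
            return some_other_foo(someInput)
        except Exception as exc:
            try:
                raise self.retry(countdown=5)
            except MaxRetriesExceededError:
                return "error"

If exc= is passed to retry(), Celery re-raises that exception instead of MaxRetriesExceededError once the retry limit is hit, so the inner except clause would have to match it instead.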

Celery node fails; pidbox already in use on restart

Submitted by 落爺英雄遲暮 on 2020-01-01 04:25:22
Question: I have celery running with a RabbitMQ broker. Today a celery node failed: it didn't execute tasks and didn't respond to the service celeryd stop command. After a few attempts the node stopped, but on start I get this message:

    [WARNING/MainProcess] celery@nodename ready.
    [WARNING/MainProcess] /home/ubuntu/virtualenv/project_1/local/lib/python2.7/site-packages/kombu/pidbox.py:73: UserWarning: A node named u'nodename' is already using this process mailbox! Maybe you forgot to shutdown the other node or did …
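The warning comes from kombu's pidbox, the remote-control mailbox each worker binds to under its node name; it typically means another worker with the same name, often the stale process that refused to stop, may still be attached to the broker. A small sketch for checking that before starting a new node, with placeholder app and broker names:

    # Ping the broker's control mailbox for the node name in question; a
    # non-empty reply means the "old" worker is in fact still running.
    from celery import Celery

    app = Celery('proj', broker='amqp://guest@localhost//')   # assumed broker URL

    replies = app.control.ping(destination=['celery@nodename'], timeout=2.0)
    print(replies)   # e.g. [{'celery@nodename': {'ok': 'pong'}}] if still alive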

Share memory areas between celery workers on one machine

Submitted by 十年热恋 on 2019-12-21 12:06:30
Question: I want to share small pieces of information between my worker nodes (for example cached authorization tokens, statistics, ...) in celery. If I create a global inside my tasks file it's unique per worker (my workers are processes and have a lifetime of one task/execution). What is the best practice? Should I save the state externally (DB), or create old-fashioned shared memory (which could be difficult because of the different pool implementations in celery)? Thanks in advance! Answer 1: I finally found …
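The accepted answer is cut off above. One common pattern, sketched here under the assumption that a Redis instance is available, is to keep such shared state in an external store that every worker process can reach; the helper functions are hypothetical:

    import redis
    from celery import Celery

    app = Celery('proj', broker='amqp://')                # assumed app/broker
    r = redis.Redis(host='localhost', port=6379, db=0)    # assumed Redis location

    @app.task
    def call_protected_api(resource_id):
        token = r.get('auth_token')               # token shared by all workers
        if token is None:
            token = acquire_new_token()           # hypothetical helper
            r.set('auth_token', token, ex=3600)   # cache for an hour
        return fetch_resource(resource_id, token) # hypothetical helper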

Running multiple Django Celery websites on same server

Submitted by 十年热恋 on 2019-12-21 04:51:25
Question: I'm running multiple Django/apache/wsgi websites on the same server using apache2 virtual hosts, and I would like to use celery. But if I start celeryd for multiple websites, all the websites end up using the configuration (logs, DB, etc.) of the last celeryd instance I started. Is there a way to run multiple celeryd instances (one for each website), or one celeryd for all of them? It seems like it should be doable, but I can't find out how. Answer 1: This problem was a big headache; I didn't notice @Crazyshezy …
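The answer is truncated above. One way this is commonly isolated, sketched here with assumed names and using the current Celery/Django integration rather than the old django-celery package, is a separate Celery app per site, each pointed at its own broker vhost and its own Django settings, then started as its own worker:

    # site1/celery.py -- per-site Celery app (names and vhost are assumptions)
    import os
    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'site1.settings')

    app = Celery('site1', broker='amqp://site1:password@localhost/site1_vhost')
    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks()

Each site then gets its own worker process (for example one init/systemd unit per site) with its own log and pid files, so the logs, database settings and queues of one site never leak into another.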

How does a Celery worker consuming from multiple queues decide which to consume from first?

Submitted by 痞子三分冷 on 2019-12-20 10:24:15
Question: I am using Celery to perform asynchronous background tasks, with Redis as the backend. I'm interested in the behaviour of a Celery worker in the following situation: I am running a worker as a daemon using celeryd. This worker has been assigned two queues to consume from through the -Q option:

    celeryd -E -Q queue1,queue2

How does the worker decide where to fetch the next task from? Does it randomly consume a task from either queue1 or queue2? Will it prioritise fetching from queue1 …
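For reference, the setup being described would look roughly like the following in app configuration (broker, queue and task names are assumptions, and the lowercase setting names of newer Celery releases are used); the worker command above then consumes from both declared queues:

    from celery import Celery
    from kombu import Queue

    app = Celery('proj', broker='redis://localhost:6379/0')   # assumed broker

    app.conf.task_queues = (
        Queue('queue1'),
        Queue('queue2'),
    )
    app.conf.task_routes = {
        'proj.tasks.fast_task': {'queue': 'queue1'},   # hypothetical tasks
        'proj.tasks.slow_task': {'queue': 'queue2'},
    }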

celery daemon production local config file without django

Submitted by 狂风中的少年 on 2019-12-18 07:04:07
Question: I am a newbie to Celery. I created a project as per the instructions in the Celery 4.1 docs. Below are my project folder and files:

    mycelery/
        test_celery/
            celery_app.py
            tasks.py
            __init__.py

1 - celery_app.py:

    from __future__ import absolute_import
    import os
    from celery import Celery
    from kombu import Queue, Exchange
    from celery.schedules import crontab
    import datetime

    app = Celery('test_celery',
                 broker='amqp://jimmy:jimmy123@localhost/jimmy_v_host',
                 backend='rpc://',
                 include=['test_celery.tasks' …
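The excerpt stops in the middle of the Celery(...) call. For orientation, a complete minimal celery_app.py along the same lines would look roughly like this (everything after the truncated include=... line is an assumption, not taken from the question):

    from __future__ import absolute_import
    from celery import Celery

    app = Celery(
        'test_celery',
        broker='amqp://jimmy:jimmy123@localhost/jimmy_v_host',
        backend='rpc://',
        include=['test_celery.tasks'],
    )

    if __name__ == '__main__':
        app.start()

Run from the mycelery directory, such an app would typically be started with: celery -A test_celery.celery_app worker --loglevel=info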

celeryd with RabbitMQ hangs on “mingle: searching for neighbors”, but plain celery works

Submitted by 痞子三分冷 on 2019-12-13 02:37:04
Question: I'm banging my head against the wall with celeryd and RabbitMQ. This example from the tutorial works just fine:

    from celery import Celery

    app = Celery('tasks', backend='amqp', broker='amqp://')

    @app.task
    def add(x, y):
        return x + y

I run:

    celery -A tasks worker --loglevel=info

And I get the output:

    [2014-11-18 19:47:58,874: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
    [2014-11-18 19:47:58,881: INFO/MainProcess] mingle: searching for neighbors
    [2014-11-18 19:47:59,889: INFO …
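For context, "mingle" is the startup step in which the worker looks for other workers on the same broker. This excerpt does not include the resolution; one way to at least rule the mingle step out, not a fix in itself, is to start the worker with mingle and gossip disabled and see whether it then proceeds:

    celery -A tasks worker --loglevel=info --without-mingle --without-gossip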

Celery single task persistent data

Submitted by 瘦欲@ on 2019-12-12 15:15:58
Question: Let's say a single task is enough to keep a machine very busy for a few minutes. I want to get the result of the task and then, depending on the result, have the worker perform the same task again. The question I cannot find an answer to is this: can I keep data in memory on the worker machine in order to use it in the next task? Answer 1: Yes, you can. The documentation (http://docs.celeryproject.org/en/latest/userguide/tasks.html#instantiation) is a bit vague and I'm not sure if this is the best …
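The answer points at the "Instantiation" section of the task docs: a task class is instantiated once per worker process, so attributes set on it survive between calls handled by that process. A minimal sketch of that pattern, with an assumed broker and hypothetical helper functions:

    from celery import Celery, Task

    app = Celery('proj', broker='amqp://')   # assumed app/broker

    class StatefulTask(Task):
        _cache = None   # populated on first call in each worker process

    @app.task(base=StatefulTask, bind=True)
    def crunch(self, chunk):
        if self._cache is None:
            self._cache = load_expensive_data()   # hypothetical helper
        return process(self._cache, chunk)        # hypothetical helper

Note this only persists within one worker process: a prefork pool keeps one copy per child process, and nothing survives a worker restart.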