celery

Why can't it find my celery config file?

夙愿已清, submitted on 2019-12-03 03:47:26
Question: I get this warning:

    /home/myuser/mysite-env/lib/python2.6/site-packages/celery/loaders/default.py:53: NotConfigured: No celeryconfig.py module found! Please make sure it exists and is available to Python.
      NotConfigured)

I even defined it in my /etc/profile and also in my virtual environment's "activate", but it is not being read.

Answer 1: In Celery 4.1 the easiest way to solve this is:

    import celeryconfig
    from celery import Celery

    app = Celery()
    app.config_from_object(celeryconfig)
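For completeness, a minimal sketch of what such a celeryconfig.py might contain and how the app picks it up; the broker URL and task module name below are illustrative assumptions, not from the question:

    # celeryconfig.py -- a hypothetical minimal config module
    broker_url = 'redis://localhost:6379/0'
    result_backend = 'redis://localhost:6379/1'
    imports = ('proj.tasks',)

    # app.py -- loading the module object explicitly means Celery
    # never has to guess where celeryconfig.py lives on sys.path
    import celeryconfig
    from celery import Celery

    app = Celery('proj')
    app.config_from_object(celeryconfig)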

Assign different tasks to different celery workers

亡梦爱人, submitted on 2019-12-03 03:43:32
I am running my server using this command:

    celery worker -Q q1,q2 -c 2

which means the server handles all tasks on queues q1 and q2, and I have 2 worker processes running. The server should support 2 different tasks:

    @celery.task(name='test1')
    def test1():
        print "test1"
        time.sleep(3)

    @celery.task(name='test2')
    def test2():
        print "test2"

If I send my test1 tasks to queue q1 and test2 to q2, both workers will run both tasks, so the result is: test1, test2, test1, test2, ... What I need is for one of my workers to handle test1 and the other to handle test2. One solution is to run two celery
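A hedged sketch of that approach, assuming the two task names above: pin each task to its own queue in configuration and start one single-queue worker per queue, so neither worker ever sees the other's tasks. The setting name is the Celery 4 spelling and the worker names are illustrative.

    # a sketch, not from the answer: route each task to a dedicated queue
    app.conf.task_routes = {
        'test1': {'queue': 'q1'},
        'test2': {'queue': 'q2'},
    }

    # then start two single-queue workers instead of one dual-queue worker:
    #   celery worker -Q q1 -c 1 -n worker1@%h
    #   celery worker -Q q2 -c 1 -n worker2@%h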

Receiving events from celery task

妖精的绣舞, submitted on 2019-12-03 03:40:34
I have a long-running celery task which iterates over an array of items and performs some actions. The task should somehow report back which item it is currently processing, so the end user is aware of the task's progress. At the moment my Django app and celery sit together on one server, so I am able to use Django's models to report the status, but I am planning to add more workers which are away from Django, so they can't reach the DB. Right now I see a few solutions: store intermediate results manually using some storage, like redis or mongodb, making them available over the network. This worries me a
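Another option, sketched below on my own initiative rather than taken from the question: Celery tasks can publish progress through the result backend itself via custom states, which any client that can reach the backend can poll. The backend URL and item handling are illustrative assumptions.

    from celery import Celery

    app = Celery('tasks',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/1')

    @app.task(bind=True)
    def process_items(self, items):
        total = len(items)
        for i, item in enumerate(items, 1):
            # ... perform the real work on `item` here ...
            # publish progress through the result backend
            self.update_state(state='PROGRESS',
                              meta={'current': i, 'total': total})

    # any client with access to the backend can then poll:
    #   res = process_items.AsyncResult(task_id)
    #   res.state, res.info  ->  'PROGRESS', {'current': ..., 'total': ...}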

Django Celery ConnectionError: Too many heartbeats missed

十年热恋, submitted on 2019-12-03 03:09:57
Question: How can I solve the ConnectionError: Too many heartbeats missed from Celery?

Example error:

    [2013-02-11 15:15:38,513: ERROR/MainProcess] Error in timer: ConnectionError('Too many heartbeats missed', None, None, None, '')
    Traceback (most recent call last):
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py", line 97, in apply_entry
        entry()
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py", line 51, in __call__
        return self.fun(*self.args, **self.kwargs)
      File "/app/.heroku/python/lib/python2.7/site-packages/celery/utils/timer2.py",
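One workaround often suggested for this error, offered here as an assumption rather than something the truncated thread confirms: disable or lengthen AMQP heartbeats in the Celery configuration so a busy worker is not declared dead.

    # a hedged sketch using old-style (pre-4.0) Celery setting names:
    # either disable heartbeats entirely ...
    BROKER_HEARTBEAT = None
    # ... or allow a more generous interval (seconds)
    # BROKER_HEARTBEAT = 30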

SQLAlchemy session issues with celery

谁说我不能喝, submitted on 2019-12-03 03:09:20
I have scheduled a few recurring tasks with celery beat for our web app. The app itself is built using the Pyramid web framework, using the zopetransaction extension to manage sessions. In celery, I am using the app as a library and am redefining the session in models with a function. It works well, but once in a while it raises:

    InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction

I am not sure what is wrong and why it issues these warnings. Sample code, in tasks.py:

    def initialize_async_session():
        import sqlalchemy
        from webapp.models import Base,
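For comparison, a sketch of one common pattern, not taken from the thread: give the celery worker its own engine and scoped session, fully decoupled from the web app's transaction extension, and reset it around each task. The connection URL and model imports are illustrative assumptions.

    from celery import Celery
    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    app = Celery('tasks', broker='redis://localhost:6379/0')

    # a session factory owned by the worker, independent of zope transactions
    engine = create_engine('postgresql://user:password@localhost/webapp')
    Session = scoped_session(sessionmaker(bind=engine))

    @app.task
    def recurring_job():
        session = Session()
        try:
            # ... query / modify webapp models here ...
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            Session.remove()  # reset session state between tasks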

Retrieve task result by id in Celery

Anonymous (unverified), submitted on 2019-12-03 03:08:02
Question: I am trying to retrieve the result of a task which has completed. This works:

    >>> from proj.tasks import add
    >>> res = add.delay(3, 4)
    >>> res.get()
    7
    >>> res.status
    'SUCCESS'
    >>> res.id
    '0d4b36e3-a503-45e4-9125-cfec0a7dca30'

But I want to run this from another application, so I rerun the Python shell and try:

    >>> from proj.tasks import add
    >>> res = add.AsyncResult('0d4b36e3-a503-45e4-9125-cfec0a7dca30')
    >>> res.status
    'PENDING'
    >>> res.get()  # Error

How can I retrieve the result?

Answer 1: It works using AsyncResult (see this answer). So first create the task:

    from cel.tasks import
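A sketch of why the second shell reports PENDING, offered as an assumption the truncated answer appears headed toward: both processes must share a configured result backend, because Celery reports unknown task ids as PENDING by default. The URLs below are illustrative.

    from celery import Celery

    # both the producer and the inspecting application must agree on these
    app = Celery('proj',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/1')

    res = app.AsyncResult('0d4b36e3-a503-45e4-9125-cfec0a7dca30')
    print(res.status)           # 'SUCCESS' once the worker has stored it
    print(res.get(timeout=10))  # 7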

Python Flask with celery out of application context

Anonymous (unverified), submitted on 2019-12-03 03:04:01
Question: I am building a website using Python Flask. Everything was going well, and now I am trying to implement Celery. That was going well too, until I tried to send an email using Flask-Mail from Celery; now I am getting a "working outside of application context" error. The full traceback is:

    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
        R = retval = fun(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
        return self
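The usual remedy, sketched here as an assumption since the thread's answer is cut off: push a Flask application context inside the task before touching Flask-Mail. The app wiring, broker URL, and message fields are illustrative.

    from flask import Flask
    from flask_mail import Mail, Message
    from celery import Celery

    flask_app = Flask(__name__)
    flask_app.config['MAIL_DEFAULT_SENDER'] = 'noreply@example.com'
    mail = Mail(flask_app)

    celery = Celery(__name__, broker='redis://localhost:6379/0')

    @celery.task
    def send_async_email(recipient, body):
        # Flask-Mail needs an app context; a celery worker has none by default
        with flask_app.app_context():
            msg = Message(subject='Hello', recipients=[recipient], body=body)
            mail.send(msg)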

supervisor - how to run multiple commands

Anonymous (unverified), submitted on 2019-12-03 03:04:01
Question: I'm managing a Celery worker that processes a queue via Supervisor. Here's my /etc/supervisor/celery.conf:

    [program:celery]
    command = /var/worker/venv/bin/celery worker -A a_report_tasks -Q a_report_process --loglevel=INFO
    directory=/var/worker
    user=nobody
    numprocs=1
    autostart=true
    autorestart=true
    startsecs=10
    stopwaitsecs = 60
    stdout_logfile=/var/log/celery/worker.log
    stderr_logfile=/var/log/celery/worker.log
    killasgroup=true
    priority=998

How do I add this second command to run?

    /var/worker/venv/bin/celery worker -A b_report_tasks -Q b
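One straightforward approach, sketched below rather than quoted from an answer: Supervisor runs a single command per [program] section, so the second worker gets its own section. The b-side queue name and log paths are guesses, since the question's command is truncated.

    [program:celery_b]
    ; hypothetical second section; queue name guessed from the truncated command
    command = /var/worker/venv/bin/celery worker -A b_report_tasks -Q b_report_process --loglevel=INFO
    directory=/var/worker
    user=nobody
    autostart=true
    autorestart=true
    startsecs=10
    stopwaitsecs = 60
    stdout_logfile=/var/log/celery/worker_b.log
    stderr_logfile=/var/log/celery/worker_b.log
    killasgroup=true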

Celery in daemon mode

Anonymous (unverified), submitted on 2019-12-03 03:03:02
Question: I use GNU screen for running Celery in console mode, but that is a hack I don't want to use on a production server. I want to know how to daemonize Celery. I have a virtualenv with celery set up, and I want to run %venv%/bin/celeryd in daemon mode. I tried ./celeryd start and got:

    Unrecognized command line arguments: start

What else should I try to run it in daemon mode?

Answer 1: Try this /etc/init.d/celeryd script:

    #!/bin/sh -e
    ### BEGIN INIT INFO
    # Provides:          celeryd
    # Required-Start:    $network $local_fs $remote_fs
    # Required-Stop:     $network $local_fs
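As a lighter-weight alternative to an init script, offered here as an assumption rather than part of the thread: later Celery releases ship a `celery multi` helper that detaches workers with pid and log files. The app name "proj" is hypothetical.

    # start a detached worker named w1 for a hypothetical app module "proj"
    %venv%/bin/celery multi start w1 -A proj \
        --pidfile=/var/run/celery/%n.pid \
        --logfile=/var/log/celery/%n.log

    # stop it again, waiting for running tasks to finish
    %venv%/bin/celery multi stopwait w1 --pidfile=/var/run/celery/%n.pid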

Unable to start Airflow worker/flower and need clarification on Airflow architecture to confirm that the installation is correct

Anonymous (unverified), submitted on 2019-12-03 02:59:02
Question: Running a worker on a different machine results in the errors specified below. I have followed the configuration instructions and have synced the dags folder. I would also like to confirm that RabbitMQ and PostgreSQL only need to be installed on the Airflow core machine and do not need to be installed on the workers (the workers only connect to the core). The setup is specified below:

Airflow core/server computer, with the following installed:

- Python 2.7 with airflow (AIRFLOW_HOME = ~/airflow)
- celery
- psycopg2
- RabbitMQ
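For orientation, a sketch of the airflow.cfg settings that make this split work, assumed from typical Airflow 1.x CeleryExecutor setups rather than taken from the question: every worker carries the same config pointing back at the core machine, while the RabbitMQ and PostgreSQL daemons run only on the core. Hostnames and credentials below are illustrative.

    [core]
    executor = CeleryExecutor
    sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@core-host:5432/airflow

    [celery]
    broker_url = pyamqp://guest:guest@core-host:5672//
    result_backend = db+postgresql://airflow:airflow@core-host:5432/airflow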