celery

Django Celery Scheduling a manage.py command

Submitted by 雨燕双飞 on 2019-12-07 16:18:35
Question: I need to update the Solr index on a schedule with the command:

    (env)$ ./manage.py update_index

I've looked through the Celery docs and found info on scheduling, but haven't been able to find a way to run a Django management command on a schedule and inside a virtualenv. Would this be better run as a normal cron job? And if so, how would I run it inside the virtualenv? Does anyone have experience with this? Thanks for the help!

Answer 1: To run your command periodically from a cron job, just wrap the
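A minimal sketch of that cron approach, assuming placeholder paths for the virtualenv and the project: calling the virtualenv's own python binary directly means no "activate" step is needed.

    # crontab entry (paths are placeholders): run update_index every hour
    # using the virtualenv's python, so the environment never has to be activated.
    0 * * * * /home/user/env/bin/python /home/user/project/manage.py update_index >> /home/user/logs/update_index.log 2>&1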

Celery + SQLAlchemy : DatabaseError: (DatabaseError) SSL error: decryption failed or bad record mac

Submitted by 左心房为你撑大大i on 2019-12-07 15:52:11
Question: The error in the title is triggered intermittently when using Celery with more than one worker against a PostgreSQL database with SSL turned on. I'm using a Flask + SQLAlchemy configuration.

Answer 1: As mentioned here: https://github.com/celery/celery/issues/634 the solution in the django-celery plugin was to simply dispose of all DB connections at the start of the task. In a Flask + SQLAlchemy configuration, doing this worked for me:

    from celery.signals import task_prerun

    @task_prerun.connect
    def on_task_init(*args, **kwargs)
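A fuller sketch of that signal handler, assuming a Flask-SQLAlchemy db object (the import path and the name db are placeholders for whatever engine/session object your app exposes):

    from celery.signals import task_prerun
    from myapp.extensions import db  # placeholder import for your Flask-SQLAlchemy instance

    @task_prerun.connect
    def on_task_init(*args, **kwargs):
        # Drop any pooled connections inherited from the parent process so each
        # task opens a fresh SSL connection to PostgreSQL.
        db.engine.dispose()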

Scheduling celery tasks with large ETA

Submitted by 旧巷老猫 on 2019-12-07 14:58:53
Question: I am currently experimenting with future tasks in Celery using the ETA feature and a Redis broker. One of the known issues with using a Redis broker has to do with the visibility timeout:

    If a task isn't acknowledged within the visibility timeout, the task will be redelivered to another worker and executed. This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact, if that happens it will be executed again, and again in a loop.

Some
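One commonly suggested mitigation, sketched here with the old-style setting name, is to raise the Redis visibility timeout above the largest ETA you ever schedule; this does not remove the limitation, it only pushes it further out.

    # Celery config sketch: visibility_timeout is in seconds (here 12 hours).
    BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}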

Running supervisord from the host, celery from a virtualenv (Django app)

Submitted by 北城以北 on 2019-12-07 14:32:34
Question: I'm trying to use Celery and a Redis queue to perform a task for my Django app. Supervisord is installed on the host via apt-get, whereas Celery resides in a specific virtualenv on my system, installed via pip. As a result, I can't seem to get the celery command to run via supervisord. If I run it from inside the virtualenv, it works fine; outside of it, it doesn't. How do I get it to run under my current setup? Is the solution simply to install Celery via apt-get instead of inside the
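One way to keep the system-wide supervisord while still running the virtualenv's Celery is to point the program's command at the celery binary inside the virtualenv; a sketch with placeholder paths and app name:

    [program:celery]
    ; Use the virtualenv's celery binary directly instead of relying on PATH.
    command=/home/user/.virtualenvs/myenv/bin/celery worker -A project_name --loglevel=INFO
    directory=/home/user/project_name
    autostart=true
    autorestart=true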

Python multiprocessing job to Celery task but AttributeError

Submitted by 最后都变了- on 2019-12-07 12:30:15
Question: I made a multiprocessed function like this:

    import multiprocessing
    import pandas as pd
    import numpy as np

    def _apply_df(args):
        df, func, kwargs = args
        return df.apply(func, **kwargs)

    def apply_by_multiprocessing(df, func, **kwargs):
        workers = kwargs.pop('workers')
        pool = multiprocessing.Pool(processes=workers)
        result = pool.map(_apply_df, [(d, func, kwargs) for d in np.array_split(df, workers)])
        pool.close()
        return pd.concat(list(result))

    def square(x):
        return x**x

    if __name__ == '__main__':

Celery / RabbitMQ - Find out the No Acks - Unacknowledged messages

Submitted by 本秂侑毒 on 2019-12-07 12:24:30
Question: I am trying to figure out how to get information on unacknowledged messages. Where are these stored? In playing with celery inspect, it seems that once a message gets acknowledged it processes through and you can follow its state. Assuming you have a results backend, you can see the results of it. But from the time you call apply_async or delay until the task gets acknowledged, it's in a black hole. Where are the noAcks stored? How do I find out how "deep" the noAcks list is? In other words, how many are there and
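Unacknowledged messages are held by the broker (delivered to a worker but not yet acked), so one way to see how deep that list is, sketched here rather than taken from the accepted answer, is to ask RabbitMQ directly on the broker host:

    # Per-queue counts of messages waiting vs. delivered-but-unacknowledged.
    rabbitmqctl list_queues name messages_ready messages_unacknowledged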

Django Celery Periodic Tasks Run But RabbitMQ Queues Aren't Consumed

Submitted by 社会主义新天地 on 2019-12-07 12:23:06
Question: After running tasks via Celery's periodic task scheduler, beat, why do I have so many unconsumed queues remaining in RabbitMQ?

Setup:
- Django web app running on Heroku
- Tasks scheduled via celery beat
- Tasks run via celery worker
- Message broker is RabbitMQ from CloudAMQP

Procfile:

    web: gunicorn --workers=2 --worker-class=gevent --bind=0.0.0.0:$PORT project_name.wsgi:application
    scheduler: python manage.py celery worker --loglevel=ERROR -B -E --maxtasksperchild=1000
    worker: python manage
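If the leftover queues turn out to be per-task result queues created by the AMQP result backend, which is an assumption about the cause rather than something stated above, a common fix is to stop storing results or let them expire quickly; a sketch with old-style setting names:

    # settings.py sketch: either of these keeps result queues from piling up.
    CELERY_IGNORE_RESULT = True          # don't store task results at all
    CELERY_TASK_RESULT_EXPIRES = 3600    # or expire stored results after an hour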

Sentry logging in Django/Celery stopped working

Submitted by 核能气质少年 on 2019-12-07 10:30:01
Question: I have no idea what's wrong. So far logging worked fine (and I was relying on that), but it seems to have stopped. I wrote a little test function (which does not work either):

core/tasks.py:

    import logging
    from celery.utils.log import get_task_logger

    logger = get_task_logger(__name__)
    logger.setLevel(logging.DEBUG)

    @app.task
    def log_error():
        logger.error('ERROR')

settings.py:

    INSTALLED_APPS += (
        'raven.contrib.django.raven_compat',
    )

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
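One thing worth checking, offered as a sketch rather than the accepted fix, is whether raven's Celery hooks are registered explicitly; raven.contrib.celery ships helpers for exactly this:

    from raven import Client
    from raven.contrib.celery import register_logger_signal, register_signal

    client = Client(dsn='...')        # DSN placeholder
    register_logger_signal(client)    # forward logging-module errors from tasks to Sentry
    register_signal(client)           # forward task failure signals to Sentry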

supervisord always returns exit status 127 at WebFaction

Submitted by 拈花ヽ惹草 on 2019-12-07 09:37:35
Question: I keep getting the following errors from supervisord at WebFaction when tailing the log:

    INFO exited: my_app (exit status 127; not expected)
    INFO gave up: my_app entered FATAL state, too many start retries too quickly

Here's my supervisord.conf:

    [unix_http_server]
    file=/home/btaylordesign/tmp/supervisord.sock

    [rpcinterface:supervisor]
    supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface

    [supervisorctl]
    serverurl=unix:///home/btaylordesign/tmp/supervisord.sock
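Exit status 127 is the shell's "command not found", so the [program:my_app] section (not shown above) most likely needs an absolute path to whatever it launches; a sketch with placeholder paths under the same home directory:

    [program:my_app]
    ; supervisord inherits a minimal PATH, so spell the command out in full
    ; (interpreter and project paths below are placeholders).
    command=/home/btaylordesign/env/bin/python /home/btaylordesign/myproject/manage.py celery worker --loglevel=INFO
    directory=/home/btaylordesign/myproject
    autostart=true
    autorestart=true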