celery

python celery - get() is delayed

ⅰ亾dé卋堺 submitted on 2020-01-07 06:35:21

Question: I am running the following simple example. Submit 20 jobs that take 2 seconds each, using a single worker: celery -A celery_test worker --concurrency 10 -l INFO With 10 concurrent slots, the 20 jobs should run in two batches of 2 seconds each, i.e. 4 seconds in total. This holds for the worker processing the data. However, fetching the results adds an extra delay of 6 seconds. Any ideas how to get rid of this delay? For scripts and outputs see below: celery_call.py: from celery_test import add import time results = [] for i in range(20): results.append(add

Huge delay when using Celery + Redis

匆匆过客 submitted on 2020-01-07 04:37:06

Question: I'm testing Django + Celery with the hello-world examples. With the RabbitMQ broker Celery works fine, but when I switched to the Redis broker/result backend I get the following: %timeit add.delay(1,2).get() 1 loops, best of 3: 503 ms per loop settings.py CELERY_RESULT_BACKEND = "redis" BROKER_URL = 'redis://localhost:6379' tasks.py @task() def add(x, y): return x + y Are there any issues in the test above? Answer 1: I found the solution in the source code: http://docs.celeryproject.org/en/latest/_modules/celery/result.html#AsyncResult.get

Peer authentication fails with PostgreSQL in celery task

假装没事ソ submitted on 2020-01-07 03:10:31

Question: When a Celery task I have set up is executed, the following exception is thrown when it attempts to fetch an object from the database: File "/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect conn = _connect(dsn, connection_factory=connection_factory, async=async) OperationalError: FATAL: Peer authentication failed for user "chris" This only occurs when a task is run by Celery. How can I fix this, please? My "host" setting is an empty string "" in settings.py.
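Peer authentication only applies to Unix-socket connections: PostgreSQL compares the connecting OS user to the requested database user, and a Celery worker typically runs as a different OS user than the interactive shell where everything works. One hedged fix (database name and password below are placeholders, not from the question) is to set a non-empty HOST so Django connects over TCP, where password-based authentication applies instead:

```python
# settings.py (sketch) — a non-empty HOST makes psycopg2 connect over TCP
# instead of the Unix socket, so "peer" authentication no longer applies.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",        # placeholder
        "USER": "chris",
        "PASSWORD": "secret",  # placeholder
        "HOST": "127.0.0.1",   # was "" (Unix socket => peer auth)
        "PORT": "5432",
    }
}
```

The alternative is to adjust pg_hba.conf so socket connections for that user use md5 instead of peer, but changing HOST avoids touching the server config.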

Django Celery Elastic Beanstalk supervisord no such process error

为君一笑 submitted on 2020-01-06 18:11:21

Question: My celery_config.txt script file in .ebextensions: #!/usr/bin/env bash # Get django environment variables celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'` celeryenv=${celeryenv%?} # Create celery configuration script celeryconf="[program:celeryd-worker] ; Set full path to celery program if using virtualenv command=/opt/python/run/venv/bin/celery worker -A wellfie --loglevel=INFO
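A supervisorctl "no such process" error usually means the generated [program:...] section was never loaded into the running supervisord, or the name being restarted does not match the section name (here, celeryd-worker). A hedged sketch of the commands such a deployment script typically ends with, using the Elastic Beanstalk Python platform paths the script above already assumes:

```shell
# Re-read the config so supervisord learns about celeryd-worker, then apply
# it; restarting a name supervisord has never loaded => "no such process".
supervisorctl -c /opt/python/etc/supervisord.conf reread
supervisorctl -c /opt/python/etc/supervisord.conf update
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
```

If the section is appended to a file supervisord does not include, reread/update will not pick it up; the include path in supervisord.conf must cover it.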

Celery

女生的网名这么多〃 submitted on 2020-01-06 15:34:57

Celery 1. What is Celery? Celery is a "stalk of celery" that knows how to handle asynchronous tasks, scheduled tasks, and periodic tasks. It is a Python module for executing asynchronous, scheduled, and periodic work. Its structure consists of: 1. the user's tasks, the app; 2. a pipe, the broker, which stores the tasks (Redis or RabbitMQ are officially recommended), plus a backend, which stores the task results; 3. the workers. Asynchronous multitasking: app -> task -> broker -> worker -> backend -> task -> app. Scheduled tasks: task (with its scheduled time) -> broker -> worker waits until the time arrives -> backend -> task. Periodic tasks follow the same flow on a repeating schedule. 2. A simple Celery example: from celery import Celery import time # Create a Celery instance; this is the user's application, the app my_task = Celery("tasks", broker="redis://127.0.0.1:6379", backend="redis://127.0.0.1:6379") # Create a task for the app, func1 @my_task.task def func1(x, y): time.sleep(15)

zoneinfo data corrupt, how do I compile new data?

ぃ、小莉子 submitted on 2020-01-06 08:14:15

Question: Basically the same thing happened again as when I asked this question. However, this time I cannot get it right. I tried Burhan Khalid's answer again and got the same errors. I also tried copy-pasting the zoneinfo folder from a backup again, but this time it did not fix my errors. Version of Django = 1.4.5 Version of Celery = 3.0.8 Version of Django-Celery = 3.0.6 Version of pytz = 2013b (same as the files I am downloading) OS = Mac Mountain Lion Attempt 1: Clear the

Celery systemd proper configuration for two applications to use the same daemon service

[亡魂溺海] submitted on 2020-01-06 04:31:06

Question: With some insights from my previous question, I reconfigured Celery to run as a daemon with systemd, but I am still facing issues configuring it for multiple apps. The Celery documentation (which shows how to daemonize a single app) is not enough for me to understand the multiple-app case, and I have little experience daemonizing anything. So far, this is my configuration for the service, meant to let both applications use it. /etc/conf.d/celery CELERYD_NODES="w1 w2 w3" # Absolute or
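One hedged way to serve two applications from a single systemd setup (not from the original post) is a template unit: each app gets its own instance, e.g. celery@app1 and celery@app2, each reading its own environment file, instead of both sharing one /etc/conf.d/celery. A sketch, with paths, user, and app names as placeholders:

```ini
# /etc/systemd/system/celery@.service (sketch; %i is the app name)
[Unit]
Description=Celery worker for %i
After=network.target

[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery_%i
WorkingDirectory=/srv/%i
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A %i \
    --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
    --loglevel=${CELERYD_LOG_LEVEL}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
    --pidfile=${CELERYD_PID_FILE}'

[Install]
WantedBy=multi-user.target
```

`systemctl enable celery@app1 celery@app2` then starts one independent worker set per application, with pid/log paths kept distinct via each app's own /etc/conf.d/celery_appN file.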

No module named celery when installing ckanext-archiver

∥☆過路亽.° submitted on 2020-01-06 01:35:29

Question: I'm using CKAN as my open-data portal and am trying to install the Archiver extension by following the instructions at https://github.com/ckan/ckanext-archiver. However, after enabling the archiver in my CKAN config file, I am faced with this error, which I cannot solve: Traceback (most recent call last): File "/usr/lib/ckan/default/bin/paster", line 9, in <module> load_entry_point('PasteScript==1.7.5', 'console_scripts', 'paster')() File "/usr/lib/ckan/default/local/lib/python2.7/site

Tensorflow/Keras with django not working correctly with celery

↘锁芯ラ submitted on 2020-01-05 08:25:10

Question: We are building a script for face recognition from videos, mainly using TensorFlow for the basic recognition functions. When we run the script directly with python test-reco.py (which takes a video path as a parameter), it works perfectly. Now we are trying to integrate it into our website, inside a Celery task. Here is the main code: def extract_labels(self, path_to_video): if not os.path.exists(path_to_video): print("NO VIDEO!") return None video = VideoFileClip(path_to_video) n_frames = int
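A common culprit when TensorFlow/Keras code that works standalone misbehaves under Celery is that the model or graph is built at import time in the parent process and then inherited by the forked prefork workers. A hedged sketch of the usual workaround, in pure Python with a placeholder standing in for `keras.models.load_model` so it runs anywhere: initialize the model lazily, inside the worker process.

```python
# Lazily initialize the model once per worker process, not at import time,
# so forked workers never inherit a half-initialized TensorFlow state.
_model = None

def get_model():
    """Return the per-process model, creating it on first use."""
    global _model
    if _model is None:
        # In the real task this would be keras.models.load_model("...")
        _model = {"loaded": True}  # placeholder for the heavy model object
    return _model

def extract_labels(path_to_video):
    model = get_model()  # safe after fork: created in this process
    return model["loaded"]
```

An alternative workaround often suggested for this situation is running the worker with `--pool=solo` so the task executes in the main worker process.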

Python redis and celery too many clients, different errors on each execution | Tasks connect to MySQL using pymysql

放肆的年华 submitted on 2020-01-05 07:52:26

Question: I am currently working on an app that has to process several long-running tasks. I am using Python 3, Flask, Celery, and Redis. I have a working solution on localhost, but on Heroku there are many errors, and every execution of the app triggers a different set of errors. I know it can't be random, so I am trying to figure out where to start looking. I have a feeling something must be wrong with Redis, and I am trying to understand what clients are and where they come from, but I am
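Hosted Redis plans cap the number of simultaneous clients, and every web dyno, worker process, and scheduler opens its own connection pool, so a setup that works on localhost can exhaust the cap on Heroku, producing different "too many clients" failures depending on which connection loses the race. A hedged starting point (the setting names are real Celery options; the values are assumptions to tune against the plan's limit):

```python
# Celery settings sketch: keep connection pools small so the combined
# client count across all dynos stays under the Redis plan's limit.
broker_pool_limit = 1           # max broker connections held per process
redis_max_connections = 20      # cap on the Redis result-backend pool
broker_connection_timeout = 30  # fail fast instead of piling up retries
```

Checking the plan's client limit and multiplying by (dynos x processes per dyno) usually shows whether the defaults can ever fit.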