celery

Django Celery Scrapy ERROR: twisted.internet.error.ReactorNotRestartable

混江龙づ霸主 posted on 2021-02-19 08:06:14
Question: I have the following pipeline: management command 'collect' (collect_positions.py) -> Celery task (tasks.py) -> Scrapy spider (MySpider) ... collect_positions.py:

    from django.core.management.base import BaseCommand
    from tracker.models import Keyword
    from tracker.tasks import positions

    class Command(BaseCommand):
        help = 'collect_positions'

        def handle(self, *args, **options):
            def chunks(l, n):
                """Yield successive n-sized chunks from l."""
                for i in range(0, len(l), n):
                    yield l[i:i + n]

            chunk_size = 1
            keywords =
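The snippet cuts off before the task body, but the error named in the title comes from Twisted: its reactor cannot be restarted once stopped, so a long-lived Celery worker cannot run a second crawl in the same process. A commonly suggested workaround, not confirmed as this author's fix, is to run each crawl in a fresh child process so every crawl gets its own reactor. A minimal sketch, with the Celery app object and the spider's import path assumed:

    from multiprocessing import Process  # under a prefork worker, billiard.Process may be needed instead
    from scrapy.crawler import CrawlerProcess

    from tracker.spiders import MySpider  # hypothetical import path for the spider

    def _run_spider(keywords):
        # A new child process gets a brand-new Twisted reactor,
        # so starting it here never hits ReactorNotRestartable.
        crawler = CrawlerProcess()
        crawler.crawl(MySpider, keywords=keywords)
        crawler.start()

    @app.task  # 'app' is the Celery application assumed to live in tasks.py
    def positions(keywords):
        p = Process(target=_run_spider, args=(keywords,))
        p.start()
        p.join()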

Celery - Obtain the Task ID in task_success signal?

*爱你&永不变心* posted on 2021-02-19 04:33:25
Question: I have an application that implements the task_success signal like this:

    @signals.task_success.connect
    def task_success_handler(sender=None, result=None, **kwargs):
        print("**************************C100")
        pprint.pprint(sender.name)
        print("**************************C100")

I can obtain the task name. Is there any way to obtain the task_id?

Answer 1: As mentioned in the documentation, sender is the task object that was executed. The task object has a request attribute which holds all the information related to the task.
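A minimal sketch of what the answer describes, reading the id from the sender's request attribute (imports are assumed from the question's surrounding code):

    from celery import signals

    @signals.task_success.connect
    def task_success_handler(sender=None, result=None, **kwargs):
        # sender is the executing task instance; its request context
        # carries the id of this particular invocation.
        print(sender.name)
        print(sender.request.id)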

Python cassandra-driver OperationTimeOut on every query in Celery task

浪子不回头ぞ posted on 2021-02-18 22:29:12
Question: I have a problem with every insert query (a small query) executed asynchronously in a Celery task. In sync mode every insert works fine, but when it is executed via apply_async() I get this:

    OperationTimedOut('errors=errors=errors={}, last_host=***.***.*.***, last_host=None, last_host=None',)

Traceback:

    Traceback (most recent call last):
      File "/var/nfs_www/***/env_v0/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
        R = retval = fun(*args, **kwargs)
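The snippet ends mid-traceback, but a frequent cause of this exact symptom, offered here as an assumption rather than the confirmed diagnosis, is creating the Cassandra session before Celery's prefork workers fork: the driver's connections do not survive the fork, so every query in the child process times out. A sketch of connecting per worker process instead, with hypothetical contact points and keyspace:

    from celery.signals import worker_process_init
    from cassandra.cluster import Cluster

    session = None

    @worker_process_init.connect
    def init_cassandra(**kwargs):
        # Runs once inside each forked worker process, so each child
        # builds its own connections instead of inheriting dead ones.
        global session
        cluster = Cluster(['127.0.0.1'])          # hypothetical contact point
        session = cluster.connect('my_keyspace')  # hypothetical keyspace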

Celery Gevent Pool - ConcurrentObjectUseError

▼魔方 西西 posted on 2021-02-18 07:06:09
Question: I have a Celery worker using the gevent pool; it makes HTTP requests and then enqueues another Celery task with the page source. I'm using Django, RabbitMQ as the broker, Redis as the Celery result backend, and Celery 4.1.0. The task has ignore_result=True, but I'm still getting this error fairly often:

    ConcurrentObjectUseError: This socket is already used by another greenlet: <bound method Waiter.switch of <gevent.hub.Waiter...>

I can see it is related to the Redis connection, but I can't figure out how to solve it.
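No answer is included in the snippet, but the error means two greenlets are using one socket at the same time, and with a Redis result backend the usual suspect is the backend connection. One pragmatic mitigation, assuming no task's result is ever read anywhere (an assumption, since the snippet only shows ignore_result=True on a single task), is to configure no result backend at all, so there is no shared Redis socket for greenlets to fight over:

    from celery import Celery

    # Hypothetical module; the broker URL is an assumption.
    # No 'backend' argument: with no result backend configured, nothing
    # touches Redis after each task, removing the contended socket.
    app = Celery('proj', broker='amqp://localhost//')

    # Belt and braces: never store results for any task.
    app.conf.task_ignore_result = True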

How to debug “could not receive data from client: Connection reset by peer”

非 Y 不嫁゛ posted on 2021-02-18 05:12:25
Question: I'm running a django-celery application on Ubuntu 12.04. When I run a celery task from my web interface, I get the following error, taken from the postgresql-9.3 logfile (at the maximum log level):

    2013-11-12 13:57:01 GMT tss_usr 8113 LOG: could not receive data from client: Connection reset by peer

tss_usr is the postgresql user of the django application database and (in this example) 8113 is, I guess, the pid of the process that dropped the connection. Have you got any idea why this happens, or at
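The question is cut off, but one frequently cited cause of this log line in a django-celery setup, offered as an assumption rather than the confirmed diagnosis, is worker processes inheriting the parent's database connection across a fork: when either side closes the shared socket, postgres logs a reset by peer. Closing Django's connections as each worker process starts forces a clean connection per process:

    from celery.signals import worker_process_init
    from django.db import connections

    @worker_process_init.connect
    def reset_db_connections(**kwargs):
        # Drop any connection inherited from the parent so this worker
        # process opens its own, instead of sharing a forked socket.
        for conn in connections.all():
            conn.close()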

python3 absolute paths, relative paths

好久不见. posted on 2021-02-17 10:12:08
What from __future__ import absolute_import does: intuitively, it says "turn on the absolute-import feature". Absolute imports naturally raise the question of relative imports. So what is a relative import? Suppose your package structure is:

    pkg/
    pkg/__init__.py
    pkg/main.py
    pkg/string.py

If you write import string in main.py, then in Python 2.4 or earlier Python first looks for string.py in the current directory; if it finds one, it imports that module, and you can use string directly in main.py. If you really wanted the string.py in the same directory, that's fine, but what if you wanted the standard library's string.py? There is no clean, concise way to skip the local string.py and import the standard one. That is where from __future__ import absolute_import comes in: with it, import string imports the standard library's string.py, while from pkg import string imports the string.py in the current directory.

---------------------

However, in my experiments, adding or removing from __future__ import absolute_import made no difference, so the above seemed useless. (On Python 3 this is expected: absolute imports are the default there, so the __future__ statement is a no-op.) Directory structure: ##
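A minimal sketch of the behavior described above, with hypothetical file contents; it only shows a difference on Python 2, run from the directory containing pkg/ so that the package is importable:

    # pkg/main.py
    from __future__ import absolute_import

    import string                            # now always the standard library module
    from pkg import string as local_string   # explicitly the string.py in this package

    print(string.ascii_lowercase)            # stdlib attribute, proving which module won
    print(local_string.__file__)             # path of the local pkg/string.py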