celery

Django, RabbitMQ, & Celery - why does Celery run old versions of my tasks after I update my Django code in development?

Submitted by 别等时光非礼了梦想 on 2019-12-05 02:08:35
So I have a Django app that occasionally sends a task to Celery for asynchronous execution. I've found that as I work on my code in development, the Django development server knows how to automatically detect when code has changed and then restart the server so I can see my changes. However, the RabbitMQ/Celery section of my app doesn't pick up on these sorts of changes in development. If I change code that will later be run in a Celery task, Celery will still keep running the old version of the code. The only way I can get it to pick up on the change is to: stop the Celery worker, stop
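The underlying cause is that the worker imports the task code once at startup and keeps running that copy. One common way to sidestep this during development (a suggestion of mine, not part of the question) is to run tasks eagerly inside the Django process, so the dev server's autoreloader applies to them too. A minimal sketch, assuming Celery 3.x-style setting names read from Django settings:

    # settings.py -- development only (hypothetical example)
    # Tasks run inline in the Django process instead of being sent to the worker,
    # so the autoreloader picks up code changes; remove this in production.
    CELERY_ALWAYS_EAGER = True                    # task_always_eager in Celery 4.x+
    CELERY_EAGER_PROPAGATES_EXCEPTIONS = True     # surface task errors immediately

Otherwise the worker simply has to be restarted whenever task code changes.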

Celery directory structure configuration

Submitted by 早过忘川 on 2019-12-05 02:08:10
Celery directory structure:

    myproject/proj
    ├── __init__.py
    ├── celery.py   # this must be named celery.py
    └── tasks.py    # this does not have to be tasks.py, but must match the modules listed in include
    test.py

celery.py:

    from __future__ import absolute_import  # use absolute imports
    from celery import Celery

    app = Celery(
        "proj",
        broker="amqp://guest@localhost//",
        backend="amqp",
        include=["proj.tasks"],
    )

    app.conf.update(
        CELERY_ROUTES={
            "proj.tasks.add": {"queue": "hipri"},  # route the add task to the hipri queue
            # to run it, specify the queue at call time: add.apply_async((2, 2), queue='hipri')
        }
    )

    if __name__ == "__main__":
        app.start()

On absolute imports: note that after from __future__ import absolute_import, a local package can no longer shadow a global package of the same name; local packages must be imported with explicit relative imports, as above: from
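For completeness, a matching proj/tasks.py and a call that targets the hipri queue might look like the sketch below (my illustration based on the snippet above, not part of the original post):

    # proj/tasks.py
    from __future__ import absolute_import
    from proj.celery import app

    @app.task
    def add(x, y):
        return x + y

    # e.g. in test.py -- dispatch explicitly to the hipri queue
    from proj.tasks import add
    result = add.apply_async((2, 2), queue='hipri')
    print(result.get(timeout=10))

A worker that consumes that queue would be started with something like: celery -A proj worker -Q hipri -l info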

Scrapy randomly crashing with celery in django

Submitted by 不羁的心 on 2019-12-05 02:04:04
Question: I am running my Scrapy project within Django on an Ubuntu server. The problem is that Scrapy randomly crashes even when only one spider is running. Below is a snippet of the traceback. As a non-expert, I have googled _SIGCHLDWaker Scrapy but couldn't make sense of the solutions I found for the snippet below:

    --- <exception caught here> ---
    File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
        why = selectable.doWrite()

getting error Received unregistered task of type 'mytasks.add'

Submitted by 懵懂的女人 on 2019-12-05 01:47:48
I have written a file mytasks.py:

    from celery import Celery

    celery = Celery("tasks", broker='redis://localhost:6379/0', backend='redis')

    @celery.task
    def add(x, y):
        return x + y

and task.py as follows:

    from mytasks import add

    add.delay(1, 1)

I have started the Redis server and the Celery worker, but when I run task.py I get the following error: Received unregistered task of type 'mytasks.add'. The message has been ignored and discarded. Did you remember to import the module containing this task? Or maybe you are using relative imports? Please see http://bit.ly/gLye1c for more
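The excerpt is cut off before any answer, but a common cause is that the worker was not started against the module that defines the task, so 'mytasks.add' never gets registered. A minimal sketch of a fix under that assumption:

    # Start the worker with the module that defines the tasks, e.g.:
    #   celery -A mytasks worker --loglevel=info
    # The registered name must match what the caller sends ('mytasks.add'),
    # so import the task exactly the way it was defined:
    from mytasks import add          # not a relative import or "from tasks import add"

    print(add.delay(1, 1).get(timeout=10))   # backend='redis' is set above, so .get() works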

Can celery celerybeat use a Database Scheduler without Django?

Submitted by 我们两清 on 2019-12-05 01:47:32
Question: I have a small infrastructure plan that does not include Django. But, because of my experience with Django, I really like Celery. All I really need is Redis + Celery to make my project. Instead of using the local filesystem, I'd like to keep everything in Redis. My current architecture uses Redis for everything until it is ready to dump the results to AWS S3. Admittedly I don't have a great reason for using Redis instead of the filesystem. I've just invested so much into architecting this
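The excerpt ends before any answer. One frequently suggested option (my assumption, not stated above) is the third-party celery-redbeat package, which keeps the beat schedule in Redis rather than in Django's database or the local celerybeat-schedule file. A rough sketch, assuming that package is installed:

    # proj/celery.py -- sketch assuming the celery-redbeat package
    from celery import Celery

    app = Celery('proj',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/0')

    app.conf.redbeat_redis_url = 'redis://localhost:6379/1'   # where RedBeat stores the schedule

    app.conf.beat_schedule = {
        'ping-every-30s': {
            'task': 'proj.tasks.ping',    # hypothetical task name
            'schedule': 30.0,
        },
    }

    # Run beat with the RedBeat scheduler:
    #   celery -A proj beat -S redbeat.RedBeatScheduler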

Django Celery send registration email does not work

Submitted by 筅森魡賤 on 2019-12-05 01:21:16
I am learning Celery. On my website, people can register an account. Once they create a new account, it automatically sends an activation email to their email address. Everything works well, but now I want to use Celery to send the email asynchronously. I use RabbitMQ 3.1.5 as the broker and Celery 3.1.7 (the latest version), which, as they say, does not need djcelery. So all I need is to install Celery. I followed the instructions on the Celery website and configured my Django project:

    proj
    --proj/celery.py

Here is my celery.py:

    from __future__ import absolute_import
    import os
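The excerpt is cut off before the task itself, but a typical asynchronous activation-email task would look roughly like the sketch below (hypothetical names, not the poster's code):

    # proj/tasks.py -- illustrative sketch
    from __future__ import absolute_import
    from celery import shared_task
    from django.core.mail import send_mail

    @shared_task
    def send_activation_email(email, activation_link):
        send_mail(
            'Activate your account',
            'Click here to activate your account: %s' % activation_link,
            'noreply@example.com',
            [email],
        )

    # In the registration view, queue it instead of sending inline:
    #   send_activation_email.delay(user.email, activation_link)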

Route to worker depending on result in Celery?

Submitted by 荒凉一梦 on 2019-12-05 01:19:56
Question: I've been using Storm lately, which contains a concept called fields grouping (afaict unrelated to the group() concept in Celery), where messages with a certain key will always be routed to the same worker. Just to get a clearer definition of what I mean, here it is from the Storm wiki. Fields grouping: The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the "user-id" field, tuples with the same "user-id" will always go to the same task,
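Celery has no built-in equivalent of fields grouping, but a similar effect can be approximated (this is an illustrative sketch of mine, not from the post) by hashing the key onto a fixed set of queues, with one dedicated worker consuming each queue:

    # Hypothetical sketch: pin all tasks for a given user_id to the same queue
    import zlib
    from celery import Celery

    app = Celery('proj', broker='amqp://guest@localhost//')

    NUM_PARTITIONS = 4
    QUEUES = ['part.%d' % i for i in range(NUM_PARTITIONS)]

    @app.task
    def handle_event(user_id, payload):
        ...

    def submit(user_id, payload):
        # a stable hash, so the same user_id always maps to the same queue
        queue = QUEUES[zlib.crc32(str(user_id).encode()) % NUM_PARTITIONS]
        handle_event.apply_async((user_id, payload), queue=queue)

    # Start one worker per partition, e.g.:
    #   celery -A proj worker -Q part.0 -n part0@%h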

MongoEngine and dealing with “UserWarning: MongoClient opened before fork. Create MongoClient with connect=False, or create client after forking”

Submitted by ≯℡__Kan透↙ on 2019-12-05 01:02:10
Question: I am using Celery and MongoEngine as part of my Django app. I am getting this warning when a Celery @shared_task accesses the MongoDB database via MongoEngine model classes: UserWarning: MongoClient opened before fork. Create MongoClient with connect=False, or create client after forking. See PyMongo's documentation for details: http://api.mongodb.org/python/current/faq.html#using-pymongo-with-multiprocessing It clearly has something to do with multiprocessing and PyMongo; that is that
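The excerpt is cut off before any answer, but the warning itself points at the usual fix: defer the connection until after the worker processes have forked by letting PyMongo connect lazily. A minimal sketch with MongoEngine (placeholder database name):

    # Connect lazily so the actual connection is created after the
    # Celery worker has forked its child processes.
    import mongoengine

    mongoengine.connect(
        'mydb',                                   # placeholder database name
        host='mongodb://localhost:27017/mydb',
        connect=False,                            # passed through to MongoClient
    )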

celeryev Queue in RabbitMQ Becomes Very Large

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-05 00:59:08
I am using Celery on RabbitMQ. I have been sending thousands of messages to the queue and they are being processed successfully; everything is working just fine. However, the number of messages in several RabbitMQ queues is growing quite large (hundreds of thousands of items in the queue). The queues are named celeryev.[...] (see screenshot below). Is this appropriate behavior? What is the purpose of these queues, and shouldn't they be regularly purged? Is there a way to purge them more regularly? I think they are taking up quite a bit of disk space. You can use the CELERY_EVENT_QUEUE_TTL
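The excerpt stops mid-answer; the setting it starts to mention, together with its companion and the option of turning events off entirely when nothing consumes them, looks roughly like this (assuming Celery 3.x setting names):

    # Let event messages and idle celeryev.* queues expire
    # (event_queue_ttl / event_queue_expires in newer Celery versions)
    CELERY_EVENT_QUEUE_TTL = 60          # seconds an individual event message lives
    CELERY_EVENT_QUEUE_EXPIRES = 120     # seconds an unused celeryev queue lives

    # Or, if no monitor (Flower, celerymon, ...) is consuming the events, don't send them:
    CELERY_SEND_EVENTS = False
    CELERY_SEND_TASK_SENT_EVENT = False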

How to call a celery task delay function from non-python languages such as Java?

Submitted by 為{幸葍}努か on 2019-12-05 00:35:23
Question: I have set up Celery + RabbitMQ on a 3-machine cluster. I have also created a task which generates a regular expression based on data from a file and uses that information to parse text.

    from celery import Celery

    celery = Celery('tasks', broker='amqp://localhost//')

    import re

    @celery.task
    def add(x, y):
        return x + y

    def get_regular_expression():
        with open("text") as fp:
            data = fp.readlines()
        str_re = "|".join([x.split()[2] for x in data])
        return str_re

    @celery.task
    def analyse_json(tw):
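The excerpt is cut off, but the core idea for non-Python clients is that a Celery call is just a broker message in a documented format. From Python the same mechanism is visible through send_task, which enqueues a task by name without importing its code; a Java producer would publish an equivalent AMQP message. A small sketch:

    # Enqueue by task name only -- no import of the task code is needed,
    # which is exactly what a non-Python producer has to reproduce.
    from celery import Celery

    app = Celery('tasks', broker='amqp://localhost//')

    result = app.send_task('tasks.add', args=[2, 2])
    # result.get() would additionally require a result backend to be configured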