celery

Celery. Decrease number of processes

空扰寡人 · submitted 2019-11-27 22:12:02
Is there any way to limit the number of worker processes in Celery? I have a small server, and Celery always creates 10 processes on a single-core processor. I want to limit this to 3 processes. I tried setting concurrency to 1 and max_tasks_per_child to 1 in my settings.py and ran 3 tasks at the same time. It spawned 1 process as the user and the other 2 as celery. It should just run 1 process and then wait for it to finish before running the next one. I am using django-celery. EDIT { I was assigning concurrency by writing CELERYD_CONCURRENCY = 1 in the settings.py file. But when I looked
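A minimal sketch of the settings-based approach for that Celery 3.x / django-celery era, assuming the worker is started through manage.py; the --concurrency flag on the worker command does the same thing and overrides the setting:

    # settings.py -- cap the prefork pool at 3 processes (old-style setting names)
    CELERYD_CONCURRENCY = 3
    # optional: recycle each worker process after one task to keep memory bounded
    CELERYD_MAX_TASKS_PER_CHILD = 1

The same limit can be passed on the command line, e.g. python manage.py celery worker --concurrency=3. Note that the main worker process still appears in ps alongside the pool processes, so you typically see concurrency + 1 entries.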

celery task and customize decorator

别说谁变了你拦得住时间么 · submitted 2019-11-27 21:26:08
I'm working on a project using Django and Celery (django-celery). Our team decided to wrap all data-access code in (app-name)/manager.py (not in Managers, the Django way), and to let the code in (app-name)/task.py deal only with assembling and performing tasks with Celery (so there is no Django ORM dependency in that layer). In my manager.py I have something like this:
    def get_tag(tag_name):
        ctype = ContentType.objects.get_for_model(Photo)
        try:
            tag = Tag.objects.get(name=tag_name)
        except ObjectDoesNotExist:
            return Tag.objects.none()
        return tag

    def get_tagged_photos(tag):
        ctype =
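The excerpt cuts off inside get_tagged_photos, so here is only a sketch of the calling side: a tasks.py that keeps Celery code free of ORM imports by going through the manager module. The decorator import matches the django-celery era, and the module names are placeholders:

    # tasks.py -- sketch; 'myapp' and the manager helpers are the ones from the question
    from celery.task import task
    from myapp import manager

    @task
    def process_tagged_photos(tag_name):
        tag = manager.get_tag(tag_name)
        photos = manager.get_tagged_photos(tag)
        # assemble/perform work here without touching the ORM directly
        return [photo.pk for photo in photos]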

【Python celery】

最后都变了- · submitted 2019-11-27 20:23:27
Contents. Original: http://blog.gqylpy.com/gqy/380 Install: pip install celery. Celery is a Python module for executing asynchronous, scheduled, and periodic tasks. Celery components: the user task app, which produces tasks; the broker and backend pipes, the former storing tasks and the latter storing task results; and the worker, which executes tasks. Simple example, worker file (workers.py):
    import time
    from celery import Celery

    # Create a Celery instance -- this is the user application "app"
    my_task = Celery(
        'tasks',
        broker='redis://127.0.0.1:6380',   # where tasks are stored; Redis here
        backend='redis://127.0.0.1:6380',  # where task results are stored
    )

    # Create a task for the application
    @my_task.task
    def fn1(x, y):
        time.sleep(10)
        return x + y
Run command: Linux: celery worker -A workers -l INFO; Windows: celery worker -A workers -l INFO -P
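A short producer-side sketch to go with workers.py above, assuming a worker is already running against the same Redis instance:

    # caller sketch: enqueue fn1 and fetch its result from the backend
    from workers import fn1

    result = fn1.delay(3, 4)        # send the task to the broker; returns immediately
    print(result.id)                # task id, stored in the result backend
    print(result.get(timeout=20))   # blocks until the worker returns 7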

In celery 3.1, making django periodic task

可紊 · submitted 2019-11-27 19:51:11
Question: Things changed too much in Django, so I can't use 3.1. I need some help. I read about making a task in Django, and read the Periodic Tasks document, but I don't know how to make periodic tasks in Django; I think this is because of my poor English. In the older version of Celery, I imported djcelery and crontab, set CELERYBEAT_SCHEDULE in settings.py, and executed it via manage.py. But it seems that I cannot start the Celery daemon that way anymore. Then where should I put CELERYBEAT_SCHEDULE? In
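For reference, a schedule in that era typically still lives in settings.py and is picked up by the beat process; a minimal sketch with a hypothetical task path:

    # settings.py -- Celery 3.1 style beat schedule
    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        'send-report-every-morning': {
            'task': 'myapp.tasks.send_report',   # placeholder task path
            'schedule': crontab(hour=7, minute=30),
        },
    }
    CELERY_TIMEZONE = 'UTC'

The worker alone does not run the schedule; beat has to be started as well (celery -A proj beat, or the worker with the -B flag for development).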

How to chain a Celery task that returns a list into a group?

岁酱吖の · submitted 2019-11-27 18:35:04
I want to create a group from a list returned by a Celery task, so that for each item in the task result set, one task will be added to the group. Here's a simple code example to explain the use case. The ??? should be the result from the previous task.
    @celery.task
    def get_list(amount):
        # In reality, fetch a list of items from a db
        return [i for i in range(amount)]

    @celery.task
    def process_item(item):
        # do stuff
        pass

    process_list = (get_list.s(10) | group(process_item.s(i) for i in ???))
I'm probably not approaching this correctly, but I'm pretty sure it's not safe to call tasks from within
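The workaround usually cited for this is an intermediate task that builds the group at runtime, once the list actually exists; a sketch of that "dmap" pattern, reusing the decorator style from the question:

    # dmap sketch: fan a callback signature out over whatever the previous task returned
    from celery import group, subtask

    @celery.task
    def dmap(items, callback):
        callback = subtask(callback)
        return group(callback.clone([item]) for item in items)()

    process_list = (get_list.s(10) | dmap.s(process_item.s()))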

How can I recover unacknowledged AMQP messages from other channels than my connection's own?

白昼怎懂夜的黑 · submitted 2019-11-27 17:58:32
It seems the longer I keep my RabbitMQ server running, the more trouble I have with unacknowledged messages. I would love to requeue them. In fact there seems to be an AMQP command to do this, but it only applies to the channel that your connection is using. I built a little pika script to at least try it out, but I am either missing something or it cannot be done this way (how about with rabbitmqctl?)
    import pika

    credentials = pika.PlainCredentials('***', '***')
    parameters = pika.ConnectionParameters(host='localhost', port=5672,
                                           credentials=credentials, virtual_host='***')

    def handle_delivery
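For completeness, a sketch of what the per-channel recover looks like with pika; as noted above, it only requeues messages that were delivered unacked on this same channel, which is exactly the limitation being asked about. Messages held by other connections are requeued by the broker when those connections close:

    # basic_recover sketch -- placeholder credentials and vhost
    import pika

    credentials = pika.PlainCredentials('guest', 'guest')
    parameters = pika.ConnectionParameters(host='localhost', port=5672,
                                           credentials=credentials, virtual_host='/')
    connection = pika.BlockingConnection(parameters)
    channel = connection.channel()
    channel.basic_recover(requeue=True)   # broker redelivers this channel's unacked messages
    connection.close()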

Django Celery Logging Best Practice

不打扰是莪最后的温柔 · submitted 2019-11-27 17:47:42
I'm trying to get Celery logging working with Django. I have logging set up in settings.py to go to the console (that works fine, as I'm hosting on Heroku). At the top of each module I have:
    import logging
    logger = logging.getLogger(__name__)
And in my tasks.py I have:
    from celery.utils.log import get_task_logger
    logger = get_task_logger(__name__)
That works fine for logging calls from a task, and I get output like this:
    2012-11-13T18:05:38+00:00 app[worker.1]: [2012-11-13 18:05:38,527: INFO/PoolWorker-2] Syc feed is starting
But if that task then calls a method in another module, e.g. a
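A sketch of the helper-module side, to make the setup above concrete; the module and function names are illustrative. Whether these records reach the console depends on the Django LOGGING config (a handler covering the root or package logger, and propagation), not on the task logger:

    # feeds/sync.py -- plain module logger, called from inside a Celery task
    import logging

    logger = logging.getLogger(__name__)

    def sync_feed():
        logger.info("Feed sync starting")   # only visible if a handler covers this logger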

Issues with Celery configuration on AWS Elastic Beanstalk - “No config updates to processes”

泄露秘密 · submitted 2019-11-27 17:03:55
Question: I have a Django 2 application deployed on AWS Elastic Beanstalk, and I'm trying to configure Celery to execute async tasks on the same machine. My files: 02_packages.config:
    files:
      "/usr/local/share/pycurl-7.43.0.tar.gz":
        mode: "000644"
        owner: root
        group: root
        source: https://pypi.python.org/packages/source/p/pycurl/pycurl-7.43.0.tar.gz
    packages:
      yum:
        python34-devel: []
        libcurl-devel: []
    commands:
      01_download_pip3:
        # run this before PIP installs requirements as it needs to be compiled with
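The .ebextensions side is specific to the deployment, but the Celery entry point for a Django project usually looks like this; a minimal sketch in which 'myproject' is a placeholder for the real settings package:

    # myproject/celery.py
    import os
    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

    app = Celery('myproject')
    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks()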

Distributed task queues (Ex. Celery) vs crontab scripts

两盒软妹~` · submitted 2019-11-27 16:57:58
I'm having trouble understanding the purpose of 'distributed task queues', for example Python's celery library. I know that with Celery you can set timed windows for functions to be executed. However, that can also easily be done with a Linux crontab pointed at a Python script. And as far as I know, and as shown by my own django-celery webapps, Celery consumes much more RAM than just setting up a raw crontab; the difference is a few hundred MB for a relatively small app. Can someone please help me with this distinction? Perhaps a high-level explanation of how task queues /
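One concrete difference: cron can only start work on a clock, while a task queue can also accept work on demand, for example from a web request, and run it outside the request/response cycle. A sketch with hypothetical names:

    # views.py -- dispatch work asynchronously from a request handler
    from django.http import HttpResponse
    from myapp.tasks import generate_report   # hypothetical Celery task

    def report_view(request):
        async_result = generate_report.delay(request.user.id)  # returns immediately
        return HttpResponse("Generating report %s" % async_result.id)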