Celery

How to stop celery worker process

Submitted by 馋奶兔 on 2019-12-02 15:13:23
I have a Django project on an Ubuntu EC2 node, which I have been using to set up asynchronous tasks with Celery. I am following this along with the docs. I've been able to get a basic task working at the command line, starting a worker with:

    (env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO

I have since made some changes to the Python code and realized that I need to restart the worker. From the command line, I've tried:

    ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9

But I can see that the worker is still running. How can I kill it?
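A gentler alternative to kill -9 is a warm shutdown, which lets workers finish the tasks they are currently executing. As a minimal sketch, assuming the app is importable as myproject.celery:app (the path used in the worker command above), Celery's control API can broadcast the shutdown:

    # A minimal sketch: ask all running workers to shut down gracefully.
    from myproject.celery import app

    app.control.shutdown()  # broadcasts a warm-shutdown request to all workers

With the kill approach, sending the default TERM signal instead of -9 triggers the same warm shutdown, so in-flight tasks are not killed mid-execution.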

Configuring Redis for remote access

Submitted by 走远了吗. on 2019-12-02 14:47:19
Configuring Redis for remote access: download and unpack the source, build and install it, enable remote access, start the server, and verify; as an extra step, verify that Celery works against it.

Download and unpack:

    wget http://download.redis.io/releases/redis-4.0.1.tar.gz
    tar -zxvf redis-4.0.1.tar.gz

Build and install:

    cd redis-4.0.1
    make
    make install PREFIX=/usr/local/redis
    cp -r redis.conf /usr/local/redis/bin/

Enable remote access:

    cd /usr/local/redis/bin/
    vim redis.conf
    # comment out: bind 127.0.0.1
    # change: daemonize no       ->  daemonize yes
    # change: protected-mode yes ->  protected-mode no

Start the server:

    cd /usr/local/redis/bin/
    ./redis-server redis.conf

Verify:

    telnet <remote-ip> 6379

Extra: verify that Celery works:

    celery -A tasks worker -l info
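To sanity-check the remote server from Python rather than telnet, a minimal sketch, assuming redis-py is installed and with REMOTE_IP standing in for the real address:

    # A minimal sketch: ping the remote Redis server (REMOTE_IP is a placeholder).
    import redis

    r = redis.Redis(host='REMOTE_IP', port=6379)
    print(r.ping())  # prints True if the server is reachable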

Message queue middleware: Celery and RabbitMQ

Submitted by China☆狼群 on 2019-12-02 14:29:43
Message queue middleware (message middleware for short) uses an efficient and reliable messaging mechanism for platform-independent data exchange, and integrates distributed systems on top of that data communication. By providing messaging and message-queuing models, it can offer application decoupling, elastic scaling, redundant storage, traffic peak shaving, asynchronous communication, data synchronization, and more in a distributed environment; as an important component of distributed system architecture, it plays a pivotal role. The open-source message middleware available today is dazzling in variety, and many names are widely familiar: ActiveMQ, RabbitMQ, Kafka, RocketMQ (Alibaba), ZeroMQ, and so on. Whichever one you choose, something about it will feel awkward; after all, none of them was tailor-made for you. Some large companies, having accumulated experience over long-term use, with messaging scenarios that have grown relatively stable and fixed, or finding that nothing on the market meets their needs, and having enough energy and manpower, choose to build a message middleware of their own. But the vast majority of companies will not reinvent the wheel, so choosing a message middleware that suits you is especially important. Even for the former group, the same selection process still happens before a stable and reliable in-house product emerges. A message queue is a container that holds messages while they are in transit. Celery: flowchart (image omitted). RabbitMQ: AMQP, the Advanced Message Queuing Protocol, is an open standard application-layer protocol designed for message-oriented middleware. Message middleware is mainly used for decoupling between components

Why use Celery instead of RabbitMQ?

Submitted by 本小妞迷上赌 on 2019-12-02 14:21:26
From my understanding, Celery is a distributed task queue, which means the only thing it should do is dispatch tasks/jobs to other servers and get the results back. RabbitMQ is a message queue, and nothing more. However, a worker could just listen to the MQ and execute the task when a message is received. This achieves exactly what Celery offers, so why is Celery needed at all? You are right, you don't need Celery at all. When you are designing a distributed system there are a lot of options, and there is no one right way to do things that fits all situations. Many people find that it is more
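For a sense of what Celery buys over a hand-rolled consumer, a minimal sketch (the add task and broker URL are illustrative, not from the question): declaring a task and dispatching it replaces writing your own publisher, consumer loop, serialization, retries, and result handling.

    # A minimal sketch of the boilerplate Celery hides (names are illustrative).
    from celery import Celery

    app = Celery('demo', broker='amqp://localhost')

    @app.task
    def add(x, y):
        return x + y

    # add.delay(2, 3) publishes a message; any running worker consumes it,
    # executes add(2, 3), and can store the result for later retrieval.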

Celery - run different workers on one server

Submitted by 折月煮酒 on 2019-12-02 14:03:30
I have two kinds of tasks: Type 1, a few high-priority small tasks; Type 2, a lot of heavy tasks with lower priority. Initially I had a simple configuration with default routing; no routing keys were used. It was not sufficient: sometimes all workers were busy with Type 2 tasks, so Type 1 tasks were delayed. I've added routing keys:

    CELERY_DEFAULT_QUEUE = "default"
    CELERY_QUEUES = {
        "default": {
            "binding_key": "task.#",
        },
        "highs": {
            "binding_key": "starter.#",
        },
    }
    CELERY_DEFAULT_EXCHANGE = "tasks"
    CELERY_DEFAULT_EXCHANGE_TYPE = "topic"
    CELERY_DEFAULT_ROUTING_KEY = "task.default"
    CELERY_ROUTES = {
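With the topic exchange above, the routing key on each message decides which queue it lands in. A minimal sketch (high_priority_task is a hypothetical task, not from the post; it assumes the settings above are loaded):

    # A minimal sketch: a routing key matching "starter.#" binds to "highs"
    # (high_priority_task is hypothetical, not from the original post).
    from celery import shared_task

    @shared_task
    def high_priority_task():
        pass

    high_priority_task.apply_async(routing_key='starter.begin')

Workers are then pinned to queues at startup (one consuming default, another consuming highs), so heavy Type 2 work can no longer starve the small high-priority tasks.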

Pros and cons to use Celery vs. RQ [closed]

Submitted by ☆樱花仙子☆ on 2019-12-02 13:55:35
Currently I'm working on a Python project that requires implementing some background jobs (mostly email sending and heavy database updates). I use Redis as the task broker. At this point I have two candidates: Celery and RQ. I've had some experience with these job queues, but I want to ask you guys to share your experience of using these tools. So: what are the pros and cons of using Celery vs. RQ? Any examples of projects/tasks where one suits better than the other? Celery looks pretty complicated, but it's a full-featured solution; actually, I don't think I need all those features. On the other side, RQ is very simple
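To illustrate the "RQ is very simple" point, a minimal sketch of enqueuing a job with RQ (tasks.send_email is a hypothetical function; assumes a local Redis and the rq package installed):

    # A minimal sketch of RQ's API (tasks.send_email is hypothetical).
    from redis import Redis
    from rq import Queue

    q = Queue(connection=Redis())
    job = q.enqueue('tasks.send_email', 'user@example.com')

A worker started with the rq worker command then picks the job up; there is no app object, routing, or configuration module involved.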

ds

Submitted by 馋奶兔 on 2019-12-02 12:21:55
    # Copyright 2018-present Lenovo
    # Confidential and Proprietary

    from django.core.management.base import BaseCommand

    __all__ = ['Command']


    class Command(BaseCommand):
        help = 'launch celery beat.'

        def add_arguments(self, parser):
            parser.add_argument(
                '--log-level',
                default='INFO',
                choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL'],
                help='Logging level'
            )
            parser.add_argument(
                '--conf-path',
                default='/var/run/lico/core',
                help='Store pid file and schedule db'
            )

        def handle(self, *args, **options):
            from antilles.common.main import app
            celery_conf_path = options['conf_path']
            from os
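The excerpt breaks off inside handle(), so the following is only a sketch of how such a command might finish, using Celery's programmatic Beat application with the options parsed above; the pid-file and schedule-file names under celery_conf_path are assumptions, not from the original code:

    # A sketch only, not the truncated original: launch beat in-process.
    # 'beat.pid' and 'beat.db' are assumed file names.
    import os

    from celery.apps.beat import Beat

    Beat(
        app=app,
        loglevel=options['log_level'],
        pidfile=os.path.join(celery_conf_path, 'beat.pid'),
        schedule=os.path.join(celery_conf_path, 'beat.db'),
    ).run()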

Queues with random GUID being generated in RabbitMQ server

Submitted by 荒凉一梦 on 2019-12-02 11:43:16
Question: Queues with a random GUID are being generated, coming from the exchange 'celeryresults'. This happened when I fired a task from the shell using the delay method, but I forgot to pass the parameters of my original function in the argument list of delay. Error displayed in the terminal where I run the celery worker:

    [2015-02-20 18:42:48,547: ERROR/MainProcess] Task customers.tasks.sendmail_task[1a4daf49-81bf-4122-8dea-2ee76c2a2ff8] raised unexpected: TypeError('sendmail_task() takes exactly 4 arguments (0 given)'
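Those GUID-named queues come from the amqp result backend, which creates one queue per task result. If the results are never consumed, marking the task to ignore them stops the queues from being created. A minimal sketch (the four parameter names are assumptions; only their count is known from the error):

    # A minimal sketch: ignore_result=True prevents the amqp result backend
    # from creating a per-result queue (parameter names are assumptions).
    from celery import shared_task

    @shared_task(ignore_result=True)
    def sendmail_task(sender, recipient, subject, body):
        ...

Setting CELERY_IGNORE_RESULT = True in the configuration has the same effect globally.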

Celery + Django on Elastic Beanstalk causing error: <class 'xmlrpclib.Fault'>, <Fault 6: 'SHUTDOWN_STATE'>

Submitted by 只愿长相守 on 2019-12-02 11:36:31
I have a Django 2 application deployed on AWS Elastic Beanstalk. I configured Celery to execute async tasks on the same machine. Since I added Celery, every time I redeploy my application with eb deploy myapp-env I get the following error:

    ERROR: [Instance: i-0bfa590abfb9c4878] Command failed on instance. Return code: 2 Output: (TRUNCATED)... ERROR: already shutting down
    error: <class 'xmlrpclib.Fault'>, <Fault 6: 'SHUTDOWN_STATE'>: file: /usr/lib64/python2.7/xmlrpclib.py line: 800
    error: <class 'xmlrpclib.Fault'>, <Fault 6: 'SHUTDOWN_STATE'>: file: /usr/lib64/python2.7/xmlrpclib.py line: 800.

Django related objects are missing from celery task (race condition?)

Submitted by 隐身守侯 on 2019-12-02 11:22:35
Question: Strange behavior that I don't know how to explain. I've got a model, Track, with some related points. I call a celery task to perform some calculations with the points, and they seem to be perfectly reachable in the method itself, but unavailable in the celery task.

    @shared_task
    def my_task(track):
        print 'in the task', track.id, track.points.all().count()

    def some_method():
        t = Track()
        t.save()
        t = fill_with_points(t)  # creating points, attaching them to a Track
        t.save()
        print 'before the task',
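The usual explanation is a race: the worker picks the task up before the surrounding database transaction commits, so the related points are not yet visible to it. A minimal sketch of the common fix, assuming Django 1.9+ and the Track/fill_with_points names from the excerpt: defer the dispatch with transaction.on_commit, and pass the primary key rather than the model instance.

    # A minimal sketch: dispatch only after the transaction commits, and
    # pass the id so the worker re-fetches a fresh, fully saved Track.
    from django.db import transaction

    def some_method():
        t = Track()
        t.save()
        t = fill_with_points(t)
        t.save()
        transaction.on_commit(lambda: my_task.delay(t.id))

Inside the task, the object is then looked up with Track.objects.get(pk=track_id) instead of being serialized into the message.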