celery

Multiple Celery projects with the same RabbitMQ broker backend

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-18 11:34:17
Question: How can I use two different Celery projects that consume messages from a single RabbitMQ installation? These scripts work fine if I use a separate RabbitMQ instance for each, but on the production machine they need to share the same RabbitMQ backend. Note: due to some constraints I cannot merge the new project into the existing one, so they will remain two different projects.

Answer 1: RabbitMQ has the ability to create virtual message brokers called virtual hosts, or vhosts. Each one is essentially a mini…
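A minimal sketch of that approach, assuming two hypothetical vhost names (project_a, project_b) and a hypothetical user; the rabbitmqctl setup is shown as comments:

    # One-time broker setup (run as the RabbitMQ admin):
    #   rabbitmqctl add_vhost project_a
    #   rabbitmqctl add_vhost project_b
    #   rabbitmqctl set_permissions -p project_a myuser ".*" ".*" ".*"
    #   rabbitmqctl set_permissions -p project_b myuser ".*" ".*" ".*"
    from celery import Celery

    # Project A points at its own vhost on the shared broker...
    app_a = Celery('project_a', broker='amqp://myuser:secret@localhost/project_a')
    # ...and Project B at another, so their queues never collide.
    app_b = Celery('project_b', broker='amqp://myuser:secret@localhost/project_b')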

In-Memory broker for celery unit tests

Submitted by 拟墨画扇 on 2019-12-18 10:23:08
Question: I have a REST API written in Django, with an endpoint that queues a Celery task when you POST to it. The response contains the task id, which I'd like to use to test that the task is created and to get the result. So, I'd like to do something like:

    def test_async_job(self):
        response = self.client.post("/api/jobs/", some_test_data, format="json")
        task_id = response.data['task_id']
        result = my_task.AsyncResult(task_id).get()
        self.assertEquals(result, ...)

I obviously don't want to have to run a celery…
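One common approach, sketched under the assumption that the Celery app reads Django settings with namespace='CELERY': run tasks eagerly so they execute in the test process and the result is available immediately.

    from django.test import override_settings

    @override_settings(
        CELERY_TASK_ALWAYS_EAGER=True,        # run tasks synchronously in-process
        CELERY_TASK_EAGER_PROPAGATES=True,    # re-raise task exceptions in the test
        CELERY_TASK_STORE_EAGER_RESULT=True,  # Celery >= 5.1: lets AsyncResult(id) find the result
    )
    def test_async_job(self):
        response = self.client.post("/api/jobs/", some_test_data, format="json")
        task_id = response.data['task_id']
        result = my_task.AsyncResult(task_id).get()
        self.assertEqual(result, ...)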

How do I tell a celery worker to stop accepting tasks? How can I check whether any celery tasks are running?

Submitted by 可紊 on 2019-12-18 07:12:23
Question: The scenario: a system running on a server consists of a Python/Flask web application and background tasks using Celery. Both the web application and the celery workers run as upstart jobs (the web app behind Nginx). Deployment to production is done with a script that:

    - Stops the upstart jobs
    - Pushes code to the server
    - Runs any db migrations
    - Starts the upstart jobs

How can I enhance the deployment script so it does the following?

    - Tells the celery worker to stop accepting tasks
    - Waits until any currently running…
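A sketch of how those two steps might look with Celery's remote-control API (cancel_consumer and inspect().active() are real calls; the queue name and app module are assumptions):

    import time
    from myapp.celery import app  # hypothetical module holding the Celery app

    # Tell all workers to stop consuming from the default queue...
    app.control.cancel_consumer('celery', reply=True)

    # ...then poll until no tasks are still executing anywhere.
    while True:
        active = app.control.inspect().active() or {}
        if not any(tasks for tasks in active.values()):
            break
        time.sleep(1)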

celery daemon production local config file without django

Submitted by 狂风中的少年 on 2019-12-18 07:04:07
Question: I am a newbie to Celery. I created a project per the instructions provided by the Celery 4.1 docs. Below are my project folder and files:

    mycelery
    └── test_celery
        ├── celery_app.py
        ├── tasks.py
        └── __init__.py

1 - celery_app.py:

    from __future__ import absolute_import
    import os
    from celery import Celery
    from kombu import Queue, Exchange
    from celery.schedules import crontab
    import datetime

    app = Celery('test_celery',
                 broker='amqp://jimmy:jimmy123@localhost/jimmy_v_host',
                 backend='rpc://',
                 include=['test_celery.tasks'…
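Since the question is about daemonizing in production without Django, here is a hedged sketch of the generic init-script configuration from the Celery daemonization docs; the paths, node count, and user name are assumptions:

    # /etc/default/celeryd — read by the generic celeryd init script
    CELERYD_NODES="worker1"                      # names of the nodes to start
    CELERY_BIN="/usr/local/bin/celery"           # absolute path to the celery program
    CELERY_APP="test_celery.celery_app"          # app instance to use
    CELERYD_CHDIR="/opt/mycelery"                # where the project lives
    CELERYD_OPTS="--time-limit=300 --concurrency=4"
    CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
    CELERYD_USER="celery"                        # run workers as an unprivileged user
    CELERYD_GROUP="celery"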

Celery & RabbitMQ: "[WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?" - an experiment on a GitHub project

Submitted by 為{幸葍}努か on 2019-12-18 04:55:13
Question: Recently, I have been doing an experiment on a GitHub project to understand big-data processing frameworks.

GitHub project: https://github.com/esperdyne/celery-message-processing

We have the following components:

    1. AMQP broker (RabbitMQ): works as a message buffer, a mailbox through which different users exchange messages.
    2. Worker: works as the server providing services to various clients.
    3. Queue ("celery"): works as a multi-processing container which is used to…
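The warning in the title usually appears when a message reaches a worker's queue without the headers Celery's task protocol expects. A hedged illustration with kombu (the broker URL and queue name are assumptions):

    from kombu import Connection, Exchange, Queue

    # Publishing a plain payload to the "celery" queue, with none of the task
    # headers (task name, id, ...) a worker looks for, typically triggers
    # "Received and deleted unknown message. Wrong destination?!?"
    queue = Queue('celery', Exchange('celery', type='direct'), routing_key='celery')
    with Connection('amqp://guest:guest@localhost//') as conn:
        producer = conn.Producer()
        producer.publish({'hello': 'world'}, routing_key='celery',
                         declare=[queue])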

Measuring Celery task execution time

Submitted by ぐ巨炮叔叔 on 2019-12-18 04:52:30
Question: I have converted a standalone batch job to use Celery for dispatching the work to be done. I'm using RabbitMQ. Everything runs on a single machine and no other processes use the RabbitMQ instance. My script just creates a bunch of tasks which are processed by workers. Is there a simple way to measure the time from the start of my script until all tasks are finished? I know that this is a bit complicated by design when using message queues. But I don't want to do it in production,…
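For a development-only measurement, one straightforward sketch (the task and its module are hypothetical): dispatch everything as a group and block until the whole set completes.

    import time
    from celery import group
    from tasks import my_task  # hypothetical task module

    start = time.monotonic()
    job = group(my_task.s(i) for i in range(100))
    result = job.apply_async()
    result.join()  # blocks until every task in the group has finished
    print(f"elapsed: {time.monotonic() - start:.2f}s")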

Find out whether celery task exists

Submitted by 跟風遠走 on 2019-12-17 23:05:16
Question: Is it possible to find out whether a task with a certain task id exists? When I try to get the status, I always get PENDING:

    >>> AsyncResult('...').status
    'PENDING'

I want to know whether a given task id is a real celery task id and not a random string, and to get different results depending on whether there is a valid task for a certain id. There may have been a valid task in the past with the same id, but the results may have been deleted from the backend.

Answer 1: Celery does not write a state…
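Because PENDING is also what Celery reports for ids it has never seen, a common workaround is to record task ids yourself at dispatch time. A minimal sketch (the in-process set is a stand-in; real code would use a database or cache):

    issued_ids = set()  # hypothetical registry; use Redis/DB in practice

    def dispatch(task, *args, **kwargs):
        result = task.apply_async(args=args, kwargs=kwargs)
        issued_ids.add(result.id)
        return result.id

    def task_is_known(task_id):
        # distinguishes "never issued" from "issued but still pending"
        return task_id in issued_ids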

How can I set up Celery to call a custom initialization function before running my tasks?

Submitted by 放肆的年华 on 2019-12-17 22:35:47
Question: I have a Django project and I'm trying to use Celery to submit tasks for background processing (http://ask.github.com/celery/introduction.html). Celery integrates well with Django, and I've been able to submit my custom tasks and get back results. The only problem is that I can't find a sane way of performing custom initialization in the daemon process. I need to call an expensive function that loads a lot of memory before I start processing the tasks, and I can't afford to call that…
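One way to hook per-process initialization, sketched with Celery's worker_process_init signal (the loader function and global are hypothetical):

    from celery.signals import worker_process_init

    big_model = None  # hypothetical expensive global state

    @worker_process_init.connect
    def init_worker(**kwargs):
        # Runs once in each worker process after it is forked,
        # before it starts consuming tasks.
        global big_model
        big_model = load_expensive_model()  # hypothetical loader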

Interoperating with Django/Celery From Java

Submitted by 我与影子孤独终老i on 2019-12-17 22:24:32
Question: Our company has a Python-based web site and some Python-based worker nodes which communicate via Django/Celery and RabbitMQ. I have a Java-based application which needs to submit tasks to the Celery-based workers. I can send jobs to RabbitMQ from Java just fine, but the Celery-based workers never pick up the jobs. Looking at packet captures of both types of job submissions, there are differences, but I cannot fathom how to account for them because a lot of it is binary that I…
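The usual culprit is that a Celery worker only accepts messages that follow its task message protocol. A hedged sketch of the shape of a protocol-v2 message, shown with kombu for readability (a Java AMQP client would have to set the same properties and headers; the task name and broker URL are assumptions):

    import json
    import uuid
    from kombu import Connection

    task_id = str(uuid.uuid4())
    # Protocol v2 body: (args, kwargs, embed)
    body = json.dumps([[2, 3], {}, {"callbacks": None, "errbacks": None,
                                    "chain": None, "chord": None}])
    with Connection('amqp://guest:guest@localhost//') as conn:
        conn.Producer().publish(
            body,
            routing_key='celery',
            content_type='application/json',
            content_encoding='utf-8',
            headers={'lang': 'py', 'task': 'tasks.add',  # must match a registered task
                     'id': task_id, 'root_id': task_id,
                     'parent_id': None, 'group': None},
        )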

Connect from one Docker container to another

Submitted by 倖福魔咒の on 2019-12-17 21:56:16
Question: I want to run rabbitmq-server in one Docker container and connect to it from another container using Celery (http://celeryproject.org/). I have rabbitmq running using the command below:

    sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server

and I am running celery via:

    sudo docker run -i -t markellul/celery /bin/bash

When I try to do the very basic tutorial to validate the connection on http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html…
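On current Docker, one hedged way to let the two containers reach each other (the image names come from the question; the network name is an assumption): put both on a user-defined network so the celery container can resolve the broker by container name.

    # create a shared network and start the broker on it
    docker network create celery-net
    docker run -d --name rabbitmq --network celery-net markellul/rabbitmq /usr/sbin/rabbitmq-server
    # start the celery container on the same network
    docker run -i -t --network celery-net markellul/celery /bin/bash
    # inside that container, point Celery at the broker by hostname:
    #   app = Celery('tasks', broker='amqp://guest:guest@rabbitmq//')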