celery

Simple Celery test with print doesn't go to terminal

Submitted by 旧城冷巷雨未停 on 2019-12-23 21:25:55
Question: EDIT 1: Actually, print statements output to the Celery worker's terminal, not the terminal where the Python program is run, as @PatrickAllen indicated. Original post: I've recently started using Celery, but I can't even get a simple test going where I print a line to the terminal after a 30-second wait. In my tasks.py:

    from celery import Celery

    celery = Celery(__name__,
                    broker='amqp://guest@localhost//',
                    backend='amqp://guest@localhost//')

    @celery.task
    def test_message():
        print("schedule task says hello") …
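For context, a minimal sketch of the calling side (the module name tasks and the worker command are assumptions, not from the question): the task executes in the worker process, so the print lands in the worker's terminal/log, not the caller's.

    # caller.py — run in a separate terminal from the worker
    # (worker started with: celery -A tasks worker --loglevel=info)
    from tasks import test_message

    result = test_message.delay()  # enqueue; returns an AsyncResult immediately
    result.get(timeout=60)         # block until the worker has run the task
    # "schedule task says hello" appears in the worker's terminal, not here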

Place a timeout on calls to an unresponsive Flask route (updated)

Submitted by 对着背影说爱祢 on 2019-12-23 19:14:02
Question: I currently have a route in a Flask app that pulls data from an external server and then pushes the results to the front end. The external server is occasionally slow or unresponsive. What's the best way to place a timeout on the route call, so that the front end doesn't hang if the external server is lagging? Or is there a more appropriate way to handle this situation in Flask (not Apache, nginx, etc.)? My goal is to time out a route call, not keep an arbitrarily long process alive like this SO …
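One common approach, sketched here under assumptions (the external URL, route name, and 5-second limit are placeholders, not from the question): put the timeout on the outbound request rather than on the Flask route itself, and return an error promptly so the front end never hangs.

    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/data')
    def data():
        try:
            # fail fast if the external server is slow or unresponsive
            resp = requests.get('http://external.example.com/api', timeout=5)
            resp.raise_for_status()
        except requests.exceptions.RequestException:
            # respond immediately instead of letting the route hang
            return jsonify(error='external server timed out or failed'), 504
        return jsonify(resp.json())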

tracking progress of a celery.group task?

Submitted by 和自甴很熟 on 2019-12-23 17:08:44
Question:

    @celery.task
    def my_task(my_object):
        do_something_to_my_object(my_object)

    # in the code somewhere
    tasks = celery.group([my_task.s(obj) for obj in MyModel.objects.all()])
    group_task = tasks.apply_async()

Does Celery have something to detect the progress of a group task? Can I get the count of how many tasks there were and how many have been processed?

Answer 1: Tinkering around in the shell (IPython's tab auto-completion), I found that group_task (which is a celery.result.ResultSet object) …
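The answer is cut off above, but ResultSet does expose counting helpers; a sketch of what the tab-completion likely surfaced:

    # sketch: inspecting a celery.result.ResultSet / GroupResult
    group_task = tasks.apply_async()

    total = len(group_task.results)      # how many tasks were in the group
    done = group_task.completed_count()  # how many have succeeded so far
    print("%d/%d tasks finished" % (done, total))

    # group_task.ready() becomes True once every task in the group has run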

Update Django Model Field Based On Celery Task Status

Submitted by 天大地大妈咪最大 on 2019-12-23 17:06:45
Question: In my model, I have a status field with a default value of 'Processing'. In the Django admin interface, after the user clicks the 'Save' button, the form inputs are passed to a Celery task that just sleeps for 30 seconds. After those 30 seconds, how do I: determine whether the Celery task was successful? update the model's status field from 'Processing' to the actual status (e.g., Completed, Failed)?

models.py:

    from django.db import models

    class Scorecard(models.Model):
        name = models.CharField(max_length=100, …
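One straightforward pattern (a sketch, not the accepted answer; the task name, import path, and sleep are stand-ins): have the task itself write its outcome back onto the model, so the admin only ever reads the status field.

    # tasks.py — hedged sketch; myapp.models is a hypothetical import path
    import time
    from celery import shared_task
    from myapp.models import Scorecard

    @shared_task
    def process_scorecard(scorecard_id):
        try:
            time.sleep(30)  # placeholder for the real work
            Scorecard.objects.filter(pk=scorecard_id).update(status='Completed')
        except Exception:
            Scorecard.objects.filter(pk=scorecard_id).update(status='Failed')
            raise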

REST API or “direct” database access for remote Celery/Django workers?

Submitted by 霸气de小男生 on 2019-12-23 16:05:07
Question: I'm working on a project that will have multiple Celery workers on machines in different locations in the US that will communicate over the internet. Am I better off distributing my Django project to each machine and configuring them with the database credentials for my database host, or should I have a "main" Django/database host that presents a REST API for remote Celery tasks and workers to hit for database access? I'm mostly looking for pros/cons and any factors I haven't thought of. I can …
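To make the trade-off concrete, a sketch of the REST-API variant (the endpoint URL, payload shape, and token are hypothetical): remote workers never hold database credentials and instead POST results to the main host.

    import requests
    from celery import shared_task

    @shared_task
    def report_result(item_id, payload):
        # worker talks HTTP to the main Django host instead of its database
        requests.post(
            'https://main-host.example.com/api/results/',  # hypothetical endpoint
            json={'item': item_id, 'data': payload},
            headers={'Authorization': 'Token <token>'},    # hypothetical auth
            timeout=10,
        )

The flip side, distributing the Django project everywhere, avoids the extra API layer but means shipping database credentials and schema-coupled code to every remote machine.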

Has anyone succeeded in using Celery with Pylons?

Submitted by 為{幸葍}努か on 2019-12-23 13:25:52
Question: I have a Pylons-based webapp and I'd love to use Celery + RabbitMQ for some time-consuming tasks. I've taken a look at the celery-pylons project but I haven't succeeded in using it. My main problem with Celery is: where do I put the celeryconfig.py file, or is there any other way to specify the Celery options, e.g. BROKER_HOST and the like, from within a Pylons app (in the same way one can put the options in the Django settings.py file when using django-celery)? Basically, I investigated 2 options: …
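For reference, a minimal celeryconfig.py sketch using the old-style (pre-3.1) setting names the question mentions; celeryd looks for this module on the Python path (or via the CELERY_CONFIG_MODULE environment variable), and the tasks module name here is hypothetical.

    # celeryconfig.py — old-style settings from the celery-pylons era
    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"
    BROKER_VHOST = "/"

    CELERY_IMPORTS = ("myapp.tasks",)  # hypothetical module holding the tasks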

Couldn't start Celery with docker-compose

Submitted by 我与影子孤独终老i on 2019-12-23 12:23:46
Question: I have a Flask app with a Celery worker and Redis, and it works as expected when run on my local machine. Then I tried to Dockerize the application. When I try to build/start the services (i.e., the Flask app, Celery, and Redis) using sudo docker-compose up, all services run except Celery, which shows the error ImportError: No module named 'my_celery'. But the same code works on my local machine without any errors. Can anyone suggest a solution?

Dockerfile:

    FROM python:3.5 …
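This ImportError usually means the celery command isn't running from a directory where my_celery is importable. A hedged docker-compose sketch (service names, paths, and the -A argument are assumptions): set the working directory and the app module explicitly.

    # docker-compose.yml fragment — a sketch; names and paths are assumptions
    services:
      worker:
        build: .
        working_dir: /app          # must contain my_celery.py (or that package)
        command: celery -A my_celery worker --loglevel=info
        depends_on:
          - redis
      redis:
        image: redis:alpine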

celery: “Substantial drift from”

Submitted by 本小妞迷上赌 on 2019-12-23 12:06:55
Question: I have quite a problem with Celery on my distributed system. I have a couple of machines in different locations, and I get a lot of warnings in my log files like: "Substantial drift from celery@host [...]". I was able to get date to return the same values (even though the machines are in different countries), but python's print(utcoffset()) returns different results on the main server and the nodes. How do I fix that issue? I was unable to find any good solution except that utcoffset() should return …
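The warning comes from workers disagreeing about the current time, which matches the differing utcoffset() results. Beyond syncing clocks with NTP, a sketch of the usual settings (old-style 3.x names; treating this as the fix here is an assumption):

    # celeryconfig sketch — make every node agree on UTC
    CELERY_ENABLE_UTC = True   # timestamps exchanged between nodes are UTC
    CELERY_TIMEZONE = 'UTC'    # don't let per-machine local zones differ

    # quick check to run on each host; values should agree within NTP skew
    from datetime import datetime, timezone
    print(datetime.now(timezone.utc))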

Celery task eta is off, using RabbitMQ

Submitted by 眉间皱痕 on 2019-12-23 11:44:54
Question: I've gotten Celery tasks running OK, using the default settings from the tutorials and RabbitMQ running on Ubuntu. All is fine when I schedule a task with no delay, but when I give them an eta, they get scheduled in the future as if my clock is off somewhere. Here is some Python code that is asking for tasks:

    for index, to_address in enumerate(email_addresses):
        # schedule one email every two seconds
        delay = index * 2
        log.info("MessageUsersFormView.process_action() scheduling task, "
                 "email to …
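A frequent cause of this symptom (an assumption here, not confirmed by the truncated question): the eta is built from a naive local datetime.now() while Celery interprets times as UTC. A sketch using an aware UTC datetime instead; send_email is a hypothetical task name.

    from datetime import datetime, timedelta, timezone

    for index, to_address in enumerate(email_addresses):
        delay = index * 2  # one email every two seconds
        eta = datetime.now(timezone.utc) + timedelta(seconds=delay)  # aware UTC
        send_email.apply_async(args=[to_address], eta=eta)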

Celery workers missing heartbeats and getting substantial drift over EC2

Submitted by 一世执手 on 2019-12-23 11:01:39
Question: I am testing my Celery implementation over 3 EC2 machines right now. I am pretty confident in my implementation, but I am getting problems with the actual worker execution. My test structure is as follows:

- 1 EC2 machine is designated as the broker and also runs a Celery worker
- 1 EC2 machine is designated as the client (it runs the client Celery script that enqueues all the tasks using .delay()) and also runs a Celery worker
- 1 EC2 machine is purely a worker

All the machines have 1 Celery worker …
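For a setup like this, two knobs are commonly checked first (a sketch with illustrative values; this is not a confirmed fix for the question's cluster): keep the three machines NTP-synced, and make the AMQP heartbeat explicit so missed beats surface early.

    # celeryconfig sketch — illustrative values (celery 3.x setting names)
    BROKER_HEARTBEAT = 10            # seconds between broker heartbeats
    BROKER_HEARTBEAT_CHECKRATE = 2   # verify at twice the heartbeat rate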