celery

How can you catch a custom exception from a Celery worker, or stop it being prefixed with `celery.backends.base`?

Submitted by ⅰ亾dé卋堺 on 2019-12-09 10:53:33
Question: My Celery task raises a custom exception `NonTransientProcessingError`, which is then caught by `AsyncResult.get()`.

tasks.py:

```python
class NonTransientProcessingError(Exception):
    pass

@shared_task()
def throw_exception():
    raise NonTransientProcessingError('Error raised by POC model for test purposes')
```

In the Python console:

```python
from my_app.tasks import *

r = throw_exception.apply_async()
try:
    r.get()
except NonTransientProcessingError as e:
    print('caught NonTrans in type specific except clause')
```

But my …
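One direction that is often suggested for this (a minimal sketch under assumptions, not necessarily the poster's eventual fix): define the exception in a module that both the worker and the caller import, and let the result backend serialize results with pickle, so `get()` re-raises the original class instead of a stand-in generated under `celery.backends.base`. The broker/backend URLs below are placeholders.

```python
# Hedged sketch; URLs and module layout are assumptions for illustration.
from celery import Celery, shared_task

app = Celery('my_app',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')
# With the default JSON result serializer, unknown exception classes are
# reconstructed under celery.backends.base; pickled results keep the class.
app.conf.result_serializer = 'pickle'
app.conf.accept_content = ['json', 'pickle']


class NonTransientProcessingError(Exception):
    """Lives in a module importable by both the worker and the client."""


@shared_task()
def throw_exception():
    raise NonTransientProcessingError('Error raised by POC model for test purposes')

# Client side:
#     try:
#         throw_exception.apply_async().get()
#     except NonTransientProcessingError:
#         print('caught the original exception class')
```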

Multiple Docker containers and Celery

Submitted by 谁说胖子不能爱 on 2019-12-09 09:33:47
Question: We have the following project structure right now:

- A web server that processes incoming requests from the clients.
- An analytics module that provides recommendations to the users.

We decided to keep these modules completely independent and move them to different Docker containers. When a query from a user arrives at the web server, it sends another query to the analytics module to get the recommendations. For the recommendations to be consistent we need to do some background calculations …
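Since the two containers only need to exchange messages through the broker, a common pattern is for the web server to enqueue analytics tasks by name with `send_task()`, without importing the analytics code at all. A minimal sketch, assuming a Redis broker reachable as the `redis` service and a hypothetical task name `analytics.tasks.recommend`:

```python
# Web-server side sketch; the task itself is implemented and registered only
# inside the analytics container's Celery worker.
from celery import Celery

app = Celery(broker='redis://redis:6379/0', backend='redis://redis:6379/1')

def request_recommendations(user_id):
    # Enqueue by task name so the web container needs no analytics imports.
    result = app.send_task('analytics.tasks.recommend', args=[user_id])
    return result.get(timeout=10)
```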

In Celery, how do I run a task, and then have that task run another task, and keep it going?

Submitted by 谁说胖子不能爱 on 2019-12-09 07:08:41
Question:

```python
# tasks.py
from celery.task import Task

class Randomer(Task):
    def run(self, **kwargs):
        # run Randomer again!!!
        return random.randrange(0, 1000000)
```

```python
>>> from tasks import Randomer
>>> r = Randomer()
>>> r.delay()
```

Right now I run the simple task and it returns a random number. But how do I make it run another task, inside that task?

Answer 1: You can call `other_task.delay()` from inside `Randomer.run`; in this case you may want to set `Randomer.ignore_result = True` (and `other_task.ignore_result`, and …
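As a rough illustration of the answer's suggestion (a sketch using the decorator API rather than the class-based `Task` from the question; names and broker URL are assumed): the task simply schedules the next run of itself before returning, so each iteration is a separate queued task.

```python
import random

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task(ignore_result=True)
def randomer():
    value = random.randrange(0, 1000000)
    print(value)
    # Re-enqueue instead of recursing, so the worker is free between runs.
    randomer.apply_async(countdown=1)

# Kick it off once:
#     randomer.delay()
```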

Getting Keras (with Theano) to work with Celery

Submitted by 爷,独闯天下 on 2019-12-09 06:14:28
Question: I have some Keras code which works synchronously to predict a given input. I have even made amendments so it can work with standard multi-threading (using locks in a separate class from this). However, when running via asynchronous Celery (even with one worker and one task) I get an error on calling `predict` on the Keras model.

```python
@app.task
def predict_task(param):
    """Run task."""
    json_file = open('keras_model.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    model = model_from…
```
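A common workaround for this class of problem (a hedged sketch, not necessarily what solved the poster's case) is to load the model lazily, once per worker process, instead of sharing a model object created before the prefork worker forked. The weights filename is an assumption, and `param` is assumed to be array-like input the model accepts.

```python
from celery import Celery
from celery.signals import worker_process_init

app = Celery('predict', broker='redis://localhost:6379/0')

model = None  # one model per worker process

@worker_process_init.connect
def load_model(**kwargs):
    """Build the Keras model inside the child process, after the fork."""
    global model
    from keras.models import model_from_json
    with open('keras_model.json') as f:
        model = model_from_json(f.read())
    model.load_weights('keras_model.h5')  # assumed weights file

@app.task
def predict_task(param):
    return model.predict(param).tolist()
```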

Test if a celery task is still being processed

Submitted by 纵然是瞬间 on 2019-12-09 05:37:46
Question: How can I test whether a task (task_id) is still being processed in celery? I have the following scenario:

- Start a task in a Django view
- Store the `BaseAsyncResult` in the session
- Shut down the celery daemon (hard) so the task is not processed anymore
- Check if the task is 'dead'

Any ideas? Can I look up all tasks being processed by celery and check whether mine is still there?

Answer 1: Define a field (`PickledObjectField`) in your model to store the celery task:

```python
class YourModel(models.Model):
    ...
    celery_task = …
```
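For the "is it dead?" check itself, here is a hedged sketch of two signals that can be combined (the app module name is assumed): the result backend's recorded state, and the workers' live `active()` list, since a `PENDING`/`STARTED` state alone cannot distinguish a queued task from one lost in a hard shutdown.

```python
from celery.result import AsyncResult

from my_app.celery import app  # hypothetical Celery app module

def is_still_running(task_id):
    state = AsyncResult(task_id, app=app).state
    if state in ('SUCCESS', 'FAILURE', 'REVOKED'):
        return False
    # Ask the live workers what they are actually executing right now.
    active = app.control.inspect().active() or {}
    return any(task['id'] == task_id
               for tasks in active.values()
               for task in tasks)
```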

Starting Celery: AttributeError: 'module' object has no attribute 'celery'

Submitted by 血红的双手。 on 2019-12-09 05:09:02
Question: I am trying to start a Celery worker from the command line:

```
celery -A tasks worker --loglevel=info
```

The code in tasks.py:

```python
import os
os.environ['DJANGO_SETTINGS_MODULE'] = "proj.settings"

from celery import task

@task()
def add_photos_task(lad_id):
    ...
```

I get the following error:

```
Traceback (most recent call last):
  File "/usr/local/bin/celery", line 8, in <module>
    load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
  File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg…
```
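For reference, `celery -A tasks worker` expects the `tasks` module to expose a Celery application instance (the error above suggests it looks for an attribute named `celery`), not just bare `@task` functions. A minimal sketch of what that module could look like; the broker URL is an assumption:

```python
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

from celery import Celery

# `celery -A tasks worker` will pick up this instance.
celery = Celery('tasks', broker='amqp://guest@localhost//')

@celery.task
def add_photos_task(lad_id):
    ...
```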

Run a celery worker in the background

Submitted by 早过忘川 on 2019-12-09 04:25:00
Question: I am running a celery worker like this:

```
celery worker --app=portalmq --logfile=/tmp/portalmq.log --loglevel=INFO -E --pidfile=/tmp/portalmq.pid
```

Now I want to run this worker in the background. I have tried several things, including:

```
nohup celery worker --app=portalmq --logfile=/tmp/portal_mq.log --loglevel=INFO -E --pidfile=/tmp/portal_mq.pid >> /tmp/portal_mq.log 2>&1 </dev/null &
```

But it is not working. I have checked the celery documentation, and I found this: Running the worker as a daemon …
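One built-in alternative to `nohup` is the documented `celery multi` helper, which detaches the worker itself. The commands below are a sketch adapted to the flags above, not the poster's final setup:

```
# Start a detached worker named portalmq_worker
celery multi start portalmq_worker --app=portalmq --loglevel=INFO -E \
    --logfile=/tmp/portalmq.log --pidfile=/tmp/portalmq.pid

# Stop it again using the same pidfile
celery multi stop portalmq_worker --pidfile=/tmp/portalmq.pid
```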

Flask with create_app, SQLAlchemy and Celery

Submitted by 天涯浪子 on 2019-12-09 04:03:43
Question: I'm really struggling to get a proper setup for Flask, SQLAlchemy and Celery. I have searched extensively and tried different approaches; nothing really seems to work. Either I miss the application context, or I can't run the workers, or there are some other problems. The structure is very general so that I can build a bigger application on top of it. I'm using Flask 0.10.1, SQLAlchemy 1.0 and Celery 3.1.13. My current setup is the following:

app/__init__.py:

```python
# Empty
```

app/config.py:

```python
import os
basedir = os…
```
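For comparison, here is a sketch of the widely used application-factory pattern from the Flask documentation (adapted, not the poster's code; config keys and broker URL are assumptions). The Celery instance gets a task base class that pushes a Flask application context, which is usually the missing piece when Flask-SQLAlchemy is used inside tasks.

```python
from celery import Celery
from flask import Flask

def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            # Every task runs inside the Flask app context, so extensions
            # like Flask-SQLAlchemy behave as they do in a request.
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

def create_app():
    app = Flask(__name__)
    app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'
    return app

flask_app = create_app()
celery = make_celery(flask_app)
```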

How to make celery retry using the same worker?

Submitted by 匆匆过客 on 2019-12-08 20:28:50
Question: I'm just starting out with celery in a Django project, and am somewhat stuck at this particular problem. Basically, I need to distribute a long-running task to different workers. The task is actually broken into several steps, each of which takes considerable time to complete. Therefore, if some step fails, I'd like celery to retry this task using the same worker, to reuse the results from the completed steps. I understand that celery uses routing to distribute tasks to certain servers, but I can …
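One way this is sometimes approached (a hedged sketch with assumed names, not a definitive answer): give every worker its own queue, and when a step fails, send the retry to the queue of the host that started the task, so the partial results cached on that machine can be reused.

```python
import socket

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

class TransientStepError(Exception):
    """Hypothetical error raised by a failing step."""

@app.task(bind=True, max_retries=3)
def long_task(self, payload):
    try:
        run_remaining_steps(payload)  # hypothetical; skips finished steps
    except TransientStepError as exc:
        # Assumes each worker was started with an extra queue named after
        # its host, e.g.  celery worker -Q celery,$(hostname)
        raise self.retry(exc=exc, countdown=10, queue=socket.gethostname())
```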

How to unit test code that runs celery tasks?

Submitted by 好久不见. on 2019-12-08 18:20:36
Question: The app I am working on is heavily asynchronous. The web application runs a lot of tasks through celery depending on user actions, and the celery tasks themselves are capable of launching further tasks. Code such as the one shown below occurs in our code base quite frequently.

```python
def do_sth():
    logic()
    if condition:
        function1.apply_async(*args)
    else:
        function2.apply_async(*args)
```

Now we want to start unit testing any new code that we write, and we are not sure how to do this. What we would like to …
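Two approaches that are commonly combined for this (a hedged sketch; module and task names are assumptions): mock `apply_async` and assert that the right task was scheduled, or run the test suite with Celery's eager mode so `apply_async()` executes the task inline.

```python
from unittest import TestCase, mock

from my_app import tasks  # hypothetical module containing do_sth/function1

class DoSthTests(TestCase):
    def test_schedules_function1_when_condition_holds(self):
        with mock.patch.object(tasks.function1, 'apply_async') as fake_async:
            tasks.do_sth()
            fake_async.assert_called_once()

# Alternatively, in the test settings make tasks run synchronously:
#     task_always_eager = True          # Celery 4+ naming
#     CELERY_ALWAYS_EAGER = True        # older versions
```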