celery

Run while loop concurrently with Flask server

I'm updating some LEDs using Python. I've been doing this like so:

    from LEDs import *

    myLEDs = LEDs()
    done = False
    while not done:
        myLEDs.iterate()

I wanted to use Flask to act as a bridge between a nice-looking ReactJS front end I can run in my browser (to change the current pattern, etc.) and the LED-controlling code in Python. I have Flask working fine and can handle HTTP requests. I'm wondering how I can have myLEDs.iterate() run continuously (or on a rapid schedule) concurrently with my Flask app, while the two can still communicate with one another, like so:

    myLEDs = LEDs()
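One common approach is to run the loop in a background thread inside the Flask process and share state through thread-safe primitives. A minimal sketch, assuming the hypothetical LEDs class from the question:

    import threading
    from flask import Flask

    from LEDs import LEDs  # hypothetical module from the question

    app = Flask(__name__)
    myLEDs = LEDs()
    stop_event = threading.Event()

    def led_loop():
        # Keep updating the LEDs until a handler asks us to stop.
        while not stop_event.is_set():
            myLEDs.iterate()

    @app.route("/stop")
    def stop():
        # The HTTP handler and the loop communicate through the shared event.
        stop_event.set()
        return "stopped"

    if __name__ == "__main__":
        # daemon=True lets the process exit even if the loop is still running.
        threading.Thread(target=led_loop, daemon=True).start()
        # use_reloader=False prevents the debug reloader from starting the thread twice.
        app.run(use_reloader=False)

Any handler that changes the pattern would mutate shared state the same way, ideally behind a lock if LEDs isn't thread-safe.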

Python Celery versus Threading Library for running async requests [closed]

Question: Closed. This question is opinion-based and is not currently accepting answers. Closed 5 years ago.

I am running a Python method that parses a lot of data. Since it is time-intensive, I would like to run it asynchronously on a separate thread so the user can still access the website/UI. Do threads created using the "from threading import thread" module terminate if a user exits the
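The core trade-off can be shown in a sketch: a plain thread lives inside the web server process (and a daemon thread dies when that process exits), while a Celery task runs in a separate worker process that outlives web requests. This assumes a hypothetical parse_data function and a local Redis broker:

    import threading
    from celery import Celery

    def parse_data(path):
        ...  # hypothetical time-intensive parsing

    # Option 1: a background thread inside the web process.
    # daemon=True means the thread is killed when the process exits,
    # so a server restart or crash loses in-flight work.
    t = threading.Thread(target=parse_data, args=("data.csv",), daemon=True)
    t.start()

    # Option 2: a Celery task executed by a separate worker process;
    # the web process can exit without affecting the running task.
    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def parse_data_task(path):
        parse_data(path)

    parse_data_task.delay("data.csv")  # returns immediately with an AsyncResult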

Introduction to Celery and Basic Usage

Celery is a distributed asynchronous message task queue written in Python. It makes it easy to process tasks asynchronously; if your business scenario calls for async work, Celery is worth considering. A couple of concrete scenarios:

1. You want to run a batch command on 100 machines. It may take a long time, but you don't want your program to block waiting for the result; instead you get back a task ID immediately, and some time later you use that ID to fetch the result. While the task is running, you can keep doing other things.

2. You want a scheduled job, e.g. check all your customers' records every day and send a congratulatory SMS to any customer whose birthday is today.

Celery needs a message broker to send and receive messages and to store task results; RabbitMQ or Redis is typically used. Celery has the following strengths:

Simple: once you are familiar with Celery's workflow, configuration and usage are fairly straightforward.
Highly available: if a task fails or the connection drops during execution, Celery automatically retries the task.
Fast: a single Celery process can handle millions of tasks per minute.
Flexible: almost every Celery component can be extended and customized.

Celery basic workflow diagram: https://www.cnblogs.com/alex3714/articles/6351797.html

Source: https://www.cnblogs.com/venvive/p/11802483
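To make the workflow concrete, here is a minimal sketch of defining a task, starting a worker, and fetching a result by task ID, assuming a Redis broker and result backend on localhost:

    # tasks.py -- minimal sketch, assuming Redis on localhost
    from celery import Celery

    app = Celery(
        "tasks",
        broker="redis://localhost:6379/0",    # where task messages go
        backend="redis://localhost:6379/1",   # where results are stored
    )

    @app.task
    def add(x, y):
        return x + y

    # Start a worker in one shell:
    #   celery -A tasks worker --loglevel=info
    # Then call the task from another Python shell:
    #   from tasks import add
    #   result = add.delay(4, 4)  # returns immediately with a task ID
    #   result.id                 # the ID you can hold on to
    #   result.get(timeout=10)    # -> 8, once the worker has finished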

Celery Get List Of Registered Tasks

Question: Is there a way to get a list of registered tasks? I tried:

    celery_app.tasks.keys()

which only returns built-in Celery tasks like celery.chord, celery.chain, etc.

Answer 1:

    from celery.task.control import inspect

    i = inspect()
    i.registered_tasks()

This will give a dictionary of all workers and their registered tasks.

    from itertools import chain

    set(chain.from_iterable(i.registered_tasks().values()))

In case you have multiple workers running the same tasks, or if you just need a set of all registered
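Note that celery.task.control was removed in Celery 4; the same inspection now goes through the app's control interface. A sketch, assuming an app object named celery_app and at least one running worker:

    # Celery 4+/5 equivalent of the answer above
    i = celery_app.control.inspect()
    registered = i.registered()  # {worker_name: [task_name, ...]}, or None if no worker replies

    # The app also knows its own locally registered tasks without asking workers:
    local = [name for name in celery_app.tasks if not name.startswith("celery.")]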

How to make a celery task fail from within the task?

Question: Under some conditions, I want to make a Celery task fail from within that task. I tried the following:

    from celery.task import task
    from celery import states

    @task()
    def run_simulation():
        if some_condition:
            run_simulation.update_state(state=states.FAILURE)
            return False

However, the task still reports having succeeded:

    Task sim.tasks.run_simulation[9235e3a7-c6d2-4219-bbc7-acf65c816e65] succeeded in 1.17847704887s: False

It seems that the state can only be modified while the task is running
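The worker overwrites the state with SUCCESS once the task returns. Two ways around that: raise an exception (the task is then marked FAILURE with that exception), or set the state yourself and raise celery.exceptions.Ignore so the worker leaves the state alone. A sketch:

    from celery import Celery, states
    from celery.exceptions import Ignore

    app = Celery("sim", broker="amqp://localhost//")

    @app.task(bind=True)
    def run_simulation(self, some_condition=True):
        if some_condition:
            # Record FAILURE, then tell the worker not to touch the state again.
            self.update_state(state=states.FAILURE, meta={"reason": "simulation rejected"})
            raise Ignore()
        # Otherwise run normally; simply raising any exception here
        # would also mark the task as failed.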

How to call a celery task delay function from non-Python languages such as Java?

I have set up Celery + RabbitMQ on a 3-machine cluster. I have also created a task which generates a regular expression based on data from a file and uses that information to parse text.

    from celery import Celery
    celery = Celery('tasks', broker='amqp://localhost//')

    import re

    @celery.task
    def add(x, y):
        return x + y

    def get_regular_expression():
        with open("text") as fp:
            data = fp.readlines()
        str_re = "|".join([x.split()[2] for x in data])
        return str_re

    @celery.task
    def analyse_json(tw):
        str_re = get_regular_expression()
        re.match(str_re, tw.text)

I can make the call to this task very easily
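A Celery task call is just an AMQP message in a documented format, so any language with an AMQP client (for example the RabbitMQ Java client) can publish one. The sketch below shows the message body in Celery's task protocol v1, using pika only to keep the example short; a Java producer would send the same JSON bytes to the default "celery" queue. Check the protocol reference for the version your workers expect:

    import json
    import uuid

    import pika  # stand-in for any AMQP client, e.g. the RabbitMQ Java client

    body = json.dumps({
        "task": "tasks.add",      # fully qualified task name registered on the workers
        "id": str(uuid.uuid4()),  # task ID, usable later to look up the result
        "args": [2, 3],
        "kwargs": {},
    })

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.basic_publish(
        exchange="",
        routing_key="celery",  # Celery's default queue
        body=body,
        properties=pika.BasicProperties(
            content_type="application/json",
            content_encoding="utf-8",
        ),
    )
    conn.close()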

Celery Beat: Limit to single task instance at a time

I have celery beat and celery (four workers) doing some processing steps in bulk. One of those tasks is roughly along the lines of "for each X that hasn't had a Y created, create a Y." The task runs periodically at a semi-rapid rate (every 10 seconds) and completes very quickly. There are other tasks going on as well. I've run into this issue multiple times: the beat tasks apparently become backlogged, so the same task (from different beat ticks) is executed simultaneously, causing incorrectly duplicated work. It also appears that the tasks are executed out of order. Is it possible
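A common way to enforce a single running instance is a distributed lock around the task body, for example in Redis: only the holder of the lock does the work, and overlapping beat ticks return immediately. A sketch, assuming redis-py and a hypothetical create_missing_ys step:

    import redis
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")
    r = redis.Redis()

    def create_missing_ys():
        ...  # hypothetical "for each X without a Y, create a Y" step

    @app.task
    def create_ys():
        # nx=True: set only if the key doesn't exist yet.
        # ex=30: auto-expire as a safety net if the worker dies mid-task.
        if not r.set("lock:create_ys", "1", nx=True, ex=30):
            return  # another instance holds the lock; skip this beat tick
        try:
            create_missing_ys()
        finally:
            r.delete("lock:create_ys")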

Celery: trying to shut down the worker by raising SystemExit in the task_postrun signal, but it always hangs and the main process never exits

I'm trying to shut down the main Celery process by raising SystemExit() in the task_postrun signal. The signal gets fired just fine, and the exception gets raised, but the worker never completely exits and just hangs there. How do I make this work? Am I forgetting some setting somewhere? Below is the code that I'm using for the worker (worker.py):

    from celery import Celery
    from celery import signals

    app = Celery('tasks',
                 set_as_current=True,
                 broker='amqp://guest@localhost//',
                 backend="mongodb://localhost//",
                 )
    app.config_from_object({
        "CELERYD_MAX_TASKS_PER_CHILD": 1,
        "CELERYD_POOL": "solo",
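An exception raised inside a signal handler only unwinds that handler; it does not stop the worker's consumer loop. Two approaches that do trigger a clean (warm) shutdown are broadcasting a shutdown through the control API or sending the worker process SIGTERM, which Celery traps. A sketch, assuming the solo pool from the question (so the handler runs in the main worker process):

    import os
    import signal as os_signal

    from celery import Celery, signals

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @signals.task_postrun.connect
    def shutdown_after_task(**kwargs):
        # Option 1: ask all workers on this app to warm-shutdown.
        # app.control.shutdown()

        # Option 2: signal this process directly; Celery handles SIGTERM
        # by finishing the current task and then exiting cleanly.
        os.kill(os.getpid(), os_signal.SIGTERM)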

Django Celery Database for Models on Producer and Worker

Question: I want to develop an application which uses Django as the frontend and Celery to do background work. Sometimes the Celery workers on different machines need database access to my Django frontend machine (two different servers). They need to know some realtime state, and to run the Django app with python manage.py celeryd they need access to a database with all models available. Do I have to access my MySQL database through a direct connection? Thus I would have to allow the user "my-django-app" access not
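The usual arrangement is that both machines run the same Django project with the same settings, and the worker's DATABASES entry points at the MySQL server on the frontend host (or a dedicated database host). A sketch with hypothetical hostnames and credentials:

    # settings.py on the worker machine -- hypothetical values throughout
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "my_django_app",
            "USER": "celery_worker",         # dedicated DB account for the workers
            "PASSWORD": "secret",
            "HOST": "frontend.example.com",  # machine that hosts MySQL
            "PORT": "3306",
        }
    }
    # MySQL must also allow that user to connect from the workers' addresses, e.g.:
    #   GRANT ALL ON my_django_app.* TO 'celery_worker'@'10.0.0.%';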

Framing Errors in Celery 3.0.1

I recently upgraded to Celery 3.0.1 from 2.3.0, and all the tasks run fine. Unfortunately, I'm getting a "Framing Error" exception pretty frequently. I'm also running supervisor to restart the threads, but since these are never really killed, supervisor has no way of knowing that Celery needs to be restarted. Has anyone seen this before?

    [2012-07-13 18:53:59,004: ERROR/MainProcess] Unrecoverable error: Exception('Framing Error, received 0x00 while expecting 0xce',)
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 350, in start