celery

“ResourceClosedError: The transaction is closed” error with celery beat and sqlalchemy + pyramid app

Submitted by 梦想的初衷 on 2019-12-10 10:33:26
Question: I have a Pyramid app called mainsite. The site works in a fairly asynchronous manner, mostly through threads launched from the view to carry out the backend operations. It connects to MySQL with SQLAlchemy and uses ZopeTransactionExtension for session management. So far the application has been running great. I need to run periodic jobs on it, and they need to use some of the same asynchronous functions that are launched from the view. I used apscheduler but ran into issues with
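One common direction for this kind of setup is to keep the beat-scheduled tasks away from the web app's thread-local, ZopeTransactionExtension-managed session and give each task its own SQLAlchemy session. The sketch below is an illustration under that assumption (the engine URL, broker URL, and task body are placeholders, not the asker's code):

```python
# Minimal sketch: a fresh SQLAlchemy session per Celery task, independent of
# the Pyramid request's transaction manager. All names/URLs are illustrative.
from celery import Celery
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

celery_app = Celery('periodic', broker='redis://localhost:6379/0')

engine = create_engine('mysql+pymysql://user:password@localhost/mainsite')
Session = sessionmaker(bind=engine)


@celery_app.task
def periodic_job():
    # A dedicated session avoids "the transaction is closed" errors that show
    # up when a request-bound session is reused from a beat-driven worker.
    session = Session()
    try:
        session.execute(text('SELECT 1'))   # placeholder for the real work
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
```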

Database is not updated in Celery task with Flask and SQLAlchemy

Submitted by 浪子不回头ぞ on 2019-12-10 10:32:26
Question: I'm writing a web application with Flask and SQLAlchemy. My program needs to process some work in the background and then mark it as processed in the database. Using the standard Flask/Celery example, I have something like this: from flask import Flask from celery import Celery def make_celery(app): celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL']) celery.conf.update(app.config) TaskBase = celery.Task class ContextTask(TaskBase): abstract = True def __call__(self,
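A frequent cause of "the database is not updated" in this pattern is that the session is never committed inside the task. Here is a sketch of the standard Flask/Celery ContextTask pattern completed, with an explicit commit; the `db` object and `Item` model are illustrative assumptions, not the asker's actual code:

```python
# Sketch: Flask + Celery ContextTask pattern with an explicit commit in the task.
from celery import Celery
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
db = SQLAlchemy(app)


def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            # Run every task inside the Flask application context so that
            # extensions such as Flask-SQLAlchemy are usable.
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery


celery = make_celery(app)


class Item(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    processed = db.Column(db.Boolean, default=False)


@celery.task
def mark_processed(item_id):
    item = Item.query.get(item_id)
    item.processed = True
    db.session.commit()   # without this commit the change never reaches the DB
```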

python celery: How to append a task to an old chain

Submitted by 隐身守侯 on 2019-12-10 10:15:34
Question: I keep a reference to a chain in my database. from tasks import t1, t2, t3 from celery import chain res = chain(t1.s(123) | t2.s() | t3.s())() res.get() How can I append another task to this particular chain? res.append(t2.s()) My goal is to be sure that chains are executed in the same order I specified in my code, and that if a task fails in my chain, the following tasks are not executed. For now I'm using very big tasks in a specific queue. Answer 1: All the information is contained in the message.
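A chain that has already been sent to the broker cannot be mutated, so one workaround (an assumption on my part, not the accepted answer's code) is to persist the chain as a list of signatures and only build and apply it once it is final; ordering and "stop on failure" then come from `chain()` itself:

```python
# Sketch: keep the chain as a list of signatures in your own storage, append
# to that list, and only send the chain when it is complete.
from celery import chain
from tasks import t1, t2, t3   # the question's tasks module

pending = [t1.s(123), t2.s()]      # persist something equivalent in the DB
pending.append(t3.s())             # "appending" before the chain is sent

res = chain(*pending).apply_async()
# If any task raises, the tasks after it in the chain are not executed,
# which matches the behaviour the asker wants.
```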

Implementing WebSocket in Django with Channels -- Part 2

Submitted by ≯℡__Kan透↙ on 2019-12-10 09:02:42
Hopefully, after working through these two articles, you will have a much deeper understanding of Channels and be able to use it with confidence. The previous article, "Implementing WebSocket in Django with Channels -- Part 1" (《Django使用Channels实现WebSocket--上篇》), should have given you a clear picture of the various Channels concepts, so you can integrate the Channels framework into your own Django project and implement WebSocket. This article digs deeper into Channels through an example that uses Channels + Celery to build a web-based tailf feature.

First, the goal: every logged-in user can open the tailf log page and pick a log file to follow; multiple page terminals can follow any logs at the same time without interfering with each other; and the page provides a stop button that halts both the frontend output and the backend reading of the log file. The final result is shown in the figure below. Now let's walk through the implementation.

Technical implementation. All code is based on the following software versions: python==3.6.3, django==2.2, channels==2.1.7, celery==4.3.0. Celery 4 support on Windows is incomplete, so please run and test on Linux.

Log data definition. We only want users to be able to query a fixed set of log files, so instead of a database we simply store the data in a global variable in settings.py. Add a variable named TAILF to settings.py, a dict whose keys are file IDs and whose values are file paths: TAILF = { 1: '
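To make the overall shape of the example concrete before the article's full code, here is a minimal sketch (not the author's actual implementation) of a Celery task that follows a log file from the TAILF mapping and pushes new lines to a Channels group; the task name, the group naming scheme, and the cache-based stop flag are assumptions for illustration:

```python
# Sketch: a Celery task that tails a log file and forwards new lines to a
# Channels group. Names (tail_log_file, group scheme, stop flag) are
# illustrative, not the article's code.
import time

from asgiref.sync import async_to_sync
from celery import shared_task
from channels.layers import get_channel_layer
from django.conf import settings
from django.core.cache import cache


@shared_task
def tail_log_file(file_id, client_id):
    """Follow the file registered under settings.TAILF[file_id]."""
    path = settings.TAILF[file_id]
    group = 'tailf-%s' % client_id            # one group per browser session
    channel_layer = get_channel_layer()

    with open(path) as f:
        f.seek(0, 2)                          # start at end of file, like `tail -f`
        # The stop button is assumed to clear this cache key to end the loop.
        while cache.get('tailf-running-%s' % client_id):
            line = f.readline()
            if not line:
                time.sleep(0.5)               # no new data yet, poll again
                continue
            # The consumer is expected to define a matching handler, e.g.
            # `def tailf_message(self, event)` under channels 2.x naming rules.
            async_to_sync(channel_layer.group_send)(
                group, {'type': 'tailf.message', 'message': line}
            )
```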

Celery Storing unrecoverable task failures for later resubmission

Submitted by 青春壹個敷衍的年華 on 2019-12-10 06:46:37
Question: I'm using the djkombu transport for my local development, but I will probably be using amqp (RabbitMQ) in production. I'd like to be able to iterate over failures of a particular type and resubmit them. This would be in the case of something failing on a server or some edge-case bug triggered by some new variation in the data. So I could be resubmitting jobs up to 12 hours later, after some bug is fixed or a third-party site is back up. My question is: is there a way to access old failed jobs via the
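One way to get this behaviour (an assumption, not necessarily the answer the asker received) is to record failures yourself with the task_failure signal and resubmit them later with send_task(); the in-memory list stands in for a real table:

```python
# Sketch: capture failed tasks via the task_failure signal and replay them
# later. `failed_jobs` is a stand-in for a database table.
from celery import Celery
from celery.signals import task_failure

app = Celery('myapp', broker='amqp://guest@localhost//')

failed_jobs = []


@task_failure.connect
def store_failure(sender=None, task_id=None, args=None, kwargs=None,
                  exception=None, **extra):
    # Persist enough information to re-create the task later.
    failed_jobs.append({
        'name': sender.name,
        'args': list(args or ()),
        'kwargs': kwargs or {},
        'error': repr(exception),
    })


def resubmit_all():
    # Called hours later, once the bug is fixed or the third-party site is up.
    while failed_jobs:
        job = failed_jobs.pop()
        app.send_task(job['name'], args=job['args'], kwargs=job['kwargs'])
```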

Connection is closed when a SQLAlchemy event triggers a Celery task

Submitted by 馋奶兔 on 2019-12-10 04:03:14
Question: When one of my unit tests deletes a SQLAlchemy object, the object triggers an after_delete event which triggers a Celery task to delete a file from the drive. The task runs with CELERY_ALWAYS_EAGER = True when testing. There is a gist to reproduce the issue easily. The example has two tests: one triggers the task inside the event, the other outside the event. Only the one inside the event closes the connection. To quickly reproduce the error you can run: git clone https://gist.github.com/5762792fc1d628843697.git cd
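A common way around this class of problem (an assumption on my part, not the gist's code) is to only record what needs doing inside after_delete and dispatch the Celery task from the session's after_commit event, so the eager task never runs in the middle of the flushing transaction; the model and task names below are illustrative:

```python
# Sketch: defer task dispatch from after_delete to after_commit.
from sqlalchemy import event
from sqlalchemy.orm import Session

from myapp.models import Document          # illustrative model
from myapp.tasks import delete_file        # illustrative Celery task


@event.listens_for(Document, 'after_delete')
def collect_deleted_path(mapper, connection, target):
    session = Session.object_session(target)
    session.info.setdefault('paths_to_delete', []).append(target.path)


@event.listens_for(Session, 'after_commit')
def dispatch_file_deletions(session):
    for path in session.info.pop('paths_to_delete', []):
        # With CELERY_ALWAYS_EAGER this still runs synchronously, but now it
        # runs after the transaction has finished instead of inside it.
        delete_file.delay(path)
```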

Getting TypeError: 'Module' object is not callable on celery task decorator

Submitted by ☆樱花仙子☆ on 2019-12-10 03:48:57
Question: Trying out Celery for Django, I ran into a problem with the @task decorator. This is running on Windows 7. In my celerytest.tasks module I have the following code: from celery import task @task def add(x, y): return x + y From the command prompt I run: python manage.py shell Trying to import my module from the shell: from celerytest.tasks import add I get the following error: >>> from celerytest.tasks import add Traceback (most recent call last): File "<console>", line 1, in <module> File "d:\...
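One common cause of "'Module' object is not callable" is that the name ends up bound to the celery.task module rather than the decorator, for example with an older Celery/django-celery layout or a local module shadowing the import. On current Celery, the Django-friendly spelling (a different approach than the question uses) is shared_task:

```python
# Sketch: the modern Django-friendly way to declare a task; shared_task binds
# the task to whichever Celery app is configured for the project.
from celery import shared_task


@shared_task
def add(x, y):
    return x + y
```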

Run a Celery worker that connects to the Django Test DB

Submitted by 早过忘川 on 2019-12-10 03:32:25
Question: BACKGROUND: I'm working on a project that uses Celery to schedule tasks that will run at a certain time in the future. These tasks push the state of a finite state machine forward. Here's an example: a future reminder is scheduled to be sent to the user in 2 days. When that scheduled task runs, an email is sent and the FSM advances to the next state. The next state is to schedule a reminder to run in another two days. When that task runs, it sends another email and advances the state. Etc... I
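For tests, the usual approach (an assumption, not necessarily the asker's final solution) is not to start a separate worker at all but to run tasks eagerly in the test process, so all database writes go through the Django test database; `send_reminder` below is an illustrative task name:

```python
# Sketch: run Celery tasks eagerly inside Django tests so they share the
# test database instead of a separately-connected worker.
from django.test import TestCase, override_settings

from myapp.tasks import send_reminder


@override_settings(CELERY_TASK_ALWAYS_EAGER=True,
                   CELERY_TASK_EAGER_PROPAGATES=True)
class ReminderFSMTest(TestCase):
    def test_reminder_advances_state(self):
        # With eager mode, .delay() runs synchronously in this process.
        # (Depending on the Celery version, you may need to set
        # task_always_eager on the Celery app itself instead of via settings.)
        send_reminder.delay(user_id=1)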

Maximum clients reached on Heroku and Redistogo Nano

Submitted by 老子叫甜甜 on 2019-12-10 03:28:34
Question: I am using celerybeat on Heroku with the RedisToGo Nano addon. There is one web dyno and one worker dyno. The celerybeat worker is set to perform a task every minute. The problem is: whenever I deploy a new commit, the dynos restart and I get this error: 2014-02-27T13:19:31.552352+00:00 app[worker.1]: Traceback (most recent call last): 2014-02-27T13:19:31.552352+00:00 app[worker.1]: File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 389, in start 2014-02-27T13:19:31
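The Nano plan allows only a handful of Redis connections, so the usual advice is to cap how many connections Celery opens. The following settings snippet is a sketch (old-style Celery 3.x option names to match the 2014-era traceback; exact values are illustrative, not a verified fix for this deployment):

```python
# Sketch: Celery settings that limit connections to a tiny Redis plan.
BROKER_POOL_LIMIT = 1             # single broker connection in the pool
BROKER_CONNECTION_TIMEOUT = 30
CELERY_REDIS_MAX_CONNECTIONS = 5  # cap connections to the Redis result backend
CELERYD_CONCURRENCY = 1           # fewer worker processes -> fewer connections
```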

Get progress from async python celery chain by chain id

Submitted by 半腔热情 on 2019-12-10 02:36:35
Question: I'm trying to get the progress of a task chain by querying each task's status. But when retrieving the chain by its id, I get an object that behaves differently. In tasks.py: from celery import Celery celery = Celery('tasks') celery.config_from_object('celeryconfig') def unpack_chain(nodes): while nodes.parent: yield nodes.parent nodes = nodes.parent yield nodes @celery.task def add(num, num2): return num + num2 When querying from ipython... In [43]: from celery import chain In [44]: from
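When only the final task id is stored, the parent links the asker's unpack_chain relies on may not survive a plain AsyncResult lookup. One workaround (an assumption, not the accepted answer) is to record every task id at dispatch time and compute progress by querying each result individually:

```python
# Sketch: store all task ids in the chain when launching it, then measure
# progress by checking each AsyncResult, instead of walking .parent on an
# object reconstructed from a single id.
from celery import chain
from celery.result import AsyncResult

from tasks import add, celery   # the question's app and task


def launch_chain():
    res = chain(add.s(1, 2) | add.s(3) | add.s(4)).apply_async()
    # Walk the parent links while we still have the "rich" result object
    # and persist all ids (e.g. in the database next to the chain id).
    ids, node = [], res
    while node:
        ids.append(node.id)
        node = node.parent
    return list(reversed(ids))   # first task first


def chain_progress(task_ids):
    done = sum(1 for tid in task_ids
               if AsyncResult(tid, app=celery).successful())
    return done, len(task_ids)
```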