celery

Prevent Celery Beat from running the same task

狂风中的少年 submitted on 2019-12-07 09:30:34
Question: I have Celery Beat scheduled to run tasks every 30 seconds. One task runs daily, and another runs weekly at a user-specified time and day of the week. It checks the "start time" and the "next scheduled date". The next scheduled date does not update until the task is completed. However, I want to know how to make sure that Celery Beat only runs the task once. Right now I see that Celery will run a certain task multiple times until that task's next scheduled…
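A minimal sketch of one common way to guarantee single execution, assuming Django's cache (backed by Redis or Memcached) is available as a shared lock store; the lock key name and timeout below are illustrative, not from the question:

```python
# Sketch: guard the task body with a cache-based lock so overlapping beat
# dispatches skip instead of running the same work twice.
from celery import shared_task
from django.core.cache import cache

LOCK_TTL = 60 * 30  # slightly longer than the longest expected run

@shared_task
def weekly_report(user_id):
    lock_id = f"weekly-report-lock-{user_id}"
    # cache.add is atomic with Redis/Memcached: it succeeds only if the key is new
    if not cache.add(lock_id, "locked", LOCK_TTL):
        return "already running, skipping"
    try:
        pass  # ... do the actual work here ...
    finally:
        cache.delete(lock_id)
```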

Celery 'module' object has no attribute 'app' when using Python 3

安稳与你 submitted on 2019-12-07 09:01:30
Question: I am going through the Celery tutorial. It uses Python 2 and I am trying to implement the same thing using Python 3. I have 2 files. celery_proj.py: from celery import Celery app = Celery( 'proj', broker='amqp://', backend='amqp://', include=['proj.tasks']) app.conf.update(Celery_TAST_RESULT_EXPIRES=3600,) if __name__ == '__main__': app.start() and tasks.py: from celery_proj import app @app.task def add(x, y): return x + y @app.task def mul(x, y): return x * y @app.task def xsum(numbers): return…
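A minimal sketch of how the first file might look with the module names lined up, assuming the underlying issue is that include=['proj.tasks'] does not match the actual layout (tasks.py sits next to celery_proj.py, not under a proj/ package) and that the misspelled setting was meant to be CELERY_TASK_RESULT_EXPIRES:

```python
# celery_proj.py — a sketch, not a confirmed fix
from celery import Celery

app = Celery(
    'proj',
    broker='amqp://',
    backend='amqp://',
    include=['tasks'],   # tasks.py lives next to this file, not under proj/
)
app.conf.update(CELERY_TASK_RESULT_EXPIRES=3600)  # corrected setting name

if __name__ == '__main__':
    app.start()
```

The worker would then be started so that the -A argument matches the file name, e.g. `celery -A celery_proj worker --loglevel=info`.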

Celery dynamic tasks / hiding Celery implementation behind an interface

走远了吗. submitted on 2019-12-07 08:50:57
Question: I am trying to figure out how to implement my asynchronous jobs with Celery without tying them to the Celery implementation. If I have an interface that accepts objects to schedule, such as callables (or an object that wraps a callable): ITaskManager(Interface): def schedule(task): #eventually run task And I might implement it with the threading module: ThreadingTaskManager(object) def schedule(task): Thread(task).start() # or similar But it seems this couldn't be done with celery, am I right…
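A minimal sketch of one way such an adapter could look: instead of shipping the callable itself over the broker (which the default JSON serializer would reject), a generic task receives a dotted-path string and resolves it inside the worker. The names CeleryTaskManager and run_callable are illustrative assumptions, not from the question:

```python
from importlib import import_module
from celery import Celery

app = Celery('tasks', broker='amqp://')

@app.task
def run_callable(dotted_path, args=(), kwargs=None):
    # resolve "package.module:function" and call it inside the worker
    module_path, func_name = dotted_path.split(':')
    func = getattr(import_module(module_path), func_name)
    return func(*args, **(kwargs or {}))

class CeleryTaskManager:
    """Adapter with the same schedule() contract as ThreadingTaskManager."""
    def schedule(self, dotted_path, *args, **kwargs):
        return run_callable.delay(dotted_path, args, kwargs)
```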

Celery task results not persisted with rpc

て烟熏妆下的殇ゞ submitted on 2019-12-07 08:25:37
Question: I have been trying to get Celery task results routed to another process by having results persisted to a queue, so that another process can pick the results up from that queue. I have configured Celery with CELERY_RESULT_BACKEND = 'rpc', but the value returned by the Python function is still not persisted to a queue. Not sure if any other configuration or code change is required. Please help. Here is the code example: celery.py from __future__ import absolute_import from celery import Celery app = Celery('proj', broker…
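Worth noting for context: the rpc backend sends each result to a reply queue that only the client which sent the task is expected to consume, and those result messages are transient by default. A minimal sketch of the relevant settings, with a placeholder broker URL:

```python
# celery.py — sketch of the rpc result-backend settings; the URL is a placeholder
from __future__ import absolute_import
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')
app.conf.update(
    CELERY_RESULT_BACKEND='rpc',
    CELERY_RESULT_PERSISTENT=True,   # keep result messages across a broker restart
)
```

If a completely separate process needs to read the results, a shared backend such as a database or Redis is usually a better fit than rpc.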

OperationFailure: database error when threading in MongoEngine/PyMongo

痞子三分冷 submitted on 2019-12-07 06:40:15
Question: I have a function that reads data from a website, processes it, and then loads it into MongoDB. When I run this without threading it works fine, but as soon as I set up celery tasks that just call this one function I frequently get the following error: "OperationFailure: database error: unauthorized db:dbname lock type:-1" It's somewhat odd because if I run the non-celery version on multiple terminals, I do not get this error at all. I suspect it has something to do with there not being an…
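A minimal sketch of a workaround often suggested when forked Celery workers share the parent process's MongoDB connection: open a fresh MongoEngine connection in each worker process. The database name, URI, and credentials are placeholders, and this assumes a reasonably recent MongoEngine:

```python
from celery.signals import worker_process_init
from mongoengine import connect, disconnect

@worker_process_init.connect
def reset_mongo_connection(**kwargs):
    # drop any connection object inherited from the parent process, then reconnect
    disconnect()
    connect('dbname', host='mongodb://user:secret@localhost/dbname')
```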

Celery task state depends on CELERY_TASK_RESULT_EXPIRES

帅比萌擦擦* submitted on 2019-12-07 06:36:50
Question: From what I have seen, the task state depends entirely on the value set for CELERY_TASK_RESULT_EXPIRES - if I check the task state within this interval after the task has finished executing, the state returned by AsyncResult(task_id).state is correct. If not, the state is not updated and remains PENDING forever. Can anyone explain why this happens? Is this a feature or a bug? Why does the task state depend on the result expiry time, even if I am ignoring results? (Celery…
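For context, PENDING is simply Celery's answer for any task id the result backend has no record of, so an expired (or never stored) result is indistinguishable from a task that was never sent. A small sketch illustrating that, with an assumed app import path:

```python
from celery.result import AsyncResult
from proj.celery import app   # illustrative import path

res = AsyncResult('id-that-was-never-used', app=app)
print(res.state)   # 'PENDING' — unknown ids and expired results look the same
```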

Celery design help: how to prevent concurrently executing tasks

◇◆丶佛笑我妖孽 submitted on 2019-12-07 06:20:09
Question: I'm fairly new to Celery/AMQP and am trying to come up with a task/queue/worker design to meet the following requirements. I have multiple types of "per-user" tasks: e.g., TaskA, TaskB, TaskC. Each of these "per-user" tasks reads/writes data for one particular user in the system. So at any given time, I might need to create tasks User1_TaskA, User1_TaskB, User1_TaskC, User2_TaskA, User2_TaskB, etc. I need to ensure that, for each user, no two tasks of any task type execute concurrently. I want…
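A minimal sketch of one way to serialize all per-user work: take a distributed lock keyed by user id before doing anything and retry the task if the lock is held. The Redis connection details, lock timeout, and retry delay are assumptions for illustration:

```python
import redis
from celery import shared_task

r = redis.Redis(host='localhost', port=6379)

@shared_task(bind=True, max_retries=None)
def task_a(self, user_id):
    lock = r.lock(f"user-lock-{user_id}", timeout=300)
    if not lock.acquire(blocking=False):
        # another TaskA/TaskB/TaskC for this user is running; try again later
        raise self.retry(countdown=10)
    try:
        pass  # ... per-user work here ...
    finally:
        lock.release()
```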

Celery Closes Unexpectedly After Longer Inactivity

你离开我真会死。 submitted on 2019-12-07 06:08:53
Question: I am using RabbitMQ + Celery to create a simple RPC architecture. I have one RabbitMQ message broker and one remote worker which runs the Celery daemon. There is a third server which exposes a thin RESTful API. When it receives an HTTP request, it sends a task to the remote worker, waits for the response and returns it. This works great most of the time. However, I have noticed that after longer inactivity (say 5 minutes with no incoming requests), the Celery worker behaves strangely. First…
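When idle AMQP connections are silently dropped by the broker or an intermediate firewall, settings like the following are often suggested; a sketch with illustrative values, not a confirmed fix for this particular case:

```python
# celeryconfig.py — sketch; the values are illustrative
BROKER_HEARTBEAT = 30                 # send AMQP heartbeats so idle connections stay alive
BROKER_CONNECTION_TIMEOUT = 10        # fail fast when the broker is unreachable
BROKER_CONNECTION_MAX_RETRIES = None  # keep retrying forever if the connection does drop
```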

Calling async_result.get() from within a celery task

别说谁变了你拦得住时间么 submitted on 2019-12-07 06:04:25
Question: I have a celery task that calls another remote task (it's on a different celery app, on another server). When I try to .get() the result of that remote task from within my task like this: @app.task() def my_local_task(): result_from_remote = app.send_task('remote_task', [arg1, arg2]) return result_from_remote.get() I get this error: RuntimeWarning: Never call result.get() within a task! See http://docs.celeryq.org/en/latest/userguide/tasks.html#task-synchronous-subtasks In Celery 3.2 this…
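The documented escape hatch for that warning in recent Celery versions is the allow_join_result context manager. A minimal, self-contained sketch of how the task above could use it, keeping in mind that blocking on a subtask can still deadlock the worker pool:

```python
from celery import Celery
from celery.result import allow_join_result

app = Celery('local', broker='amqp://')

@app.task
def my_local_task(arg1, arg2):
    result_from_remote = app.send_task('remote_task', [arg1, arg2])
    with allow_join_result():   # lifts the "never call get() within a task" guard
        return result_from_remote.get()
```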

Celery gives connection reset by peer

[亡魂溺海] submitted on 2019-12-07 05:15:15
Question: I set up the RabbitMQ server and added the users using the following steps: uruddarraju@*******:/usr/lib/rabbitmq/lib/rabbitmq_server-3.2.3$ sudo rabbitmqctl list_users Listing users ... guest [administrator] phantom [administrator] phantom1 [] sudo rabbitmqctl set_permissions -p phantom phantom1 ".*" ".*" ".*" uruddarraju@******:/usr/lib/rabbitmq/lib/rabbitmq_server-3.2.3$ sudo netstat -tulpn | grep :5672 tcp6 0 0 :::5672 :::* LISTEN 31341/beam.smp My celery config is like: BROKER_URL = 'amqp:…
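Given the permissions shown above (user phantom1 granted access on vhost phantom), the broker URL would need to name that same user and vhost; a sketch of what the truncated setting was presumably meant to contain, with a placeholder password:

```python
# Assumption based on the rabbitmqctl output above; the password is a placeholder
BROKER_URL = 'amqp://phantom1:password@localhost:5672/phantom'
```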