celery-task

celery daemon production local config file without django

狂风中的少年 submitted on 2019-12-18 07:04:07
Question: I am a newbie to Celery. I created a project per the instructions in the Celery 4.1 docs. Below are my project folder and files:

```
mycelery
└── test_celery
    ├── celery_app.py
    ├── tasks.py
    └── __init__.py
```

1 - celery_app.py:

```python
from __future__ import absolute_import
import os
from celery import Celery
from kombu import Queue, Exchange
from celery.schedules import crontab
import datetime

app = Celery('test_celery',
             broker='amqp://jimmy:jimmy123@localhost/jimmy_v_host',
             backend='rpc://',
             include=['test_celery.tasks'
```
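The excerpt cuts off mid-configuration. For the question in the title (a local/production config file without Django), a minimal hedged sketch is to keep the settings in a plain module and load it with `config_from_object`; the broker URL and module names below come from the question or are invented for illustration:

```python
# celeryconfig.py -- plain module holding environment-specific settings
broker_url = 'amqp://jimmy:jimmy123@localhost/jimmy_v_host'  # from the question
result_backend = 'rpc://'
imports = ('test_celery.tasks',)
```

```python
# celery_app.py -- the app itself stays configuration-free
from __future__ import absolute_import
from celery import Celery

app = Celery('test_celery')
app.config_from_object('celeryconfig')  # swap this module per environment

if __name__ == '__main__':
    app.start()
```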

Celery discovers all tasks even when `app.autodiscover_tasks()` is not called

大憨熊 submitted on 2019-12-13 03:35:12
Question: I am using Django==2.0.5 and celery==4.0.2. My proj/proj/celery.py looks like:

```python
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj', include=[])

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related
```
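The excerpt ends mid-comment. As a hedged note on the question itself: a task is registered whenever its defining module is imported, so `autodiscover_tasks()` is only one of several import triggers. A minimal sketch (module names invented):

```python
# proj/app1/tasks.py
from proj.celery import app

@app.task
def add(x, y):
    return x + y

# Merely importing this module -- via Django's AppConfig.ready(), a
# signal registration, or Celery's imports/include setting -- registers
# proj.app1.tasks.add with the app, even without autodiscover_tasks().
```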

Celery single task persistent data

瘦欲@ submitted on 2019-12-12 15:15:58
Question: Let's say a single task is enough to keep a machine very busy for a few minutes. I want to get the result of the task and then, depending on the result, have the worker perform the same task again. The question I cannot find an answer to is this: can I keep data in memory on the worker machine in order to use it on the next task?

Answer 1: Yes, you can. The documentation (http://docs.celeryproject.org/en/latest/userguide/tasks.html#instantiation) is a bit vague and I'm not sure if this is the best
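The answer is truncated; the instantiation section it links to comes down to the fact that a task class is instantiated once per worker process, so instance attributes survive between invocations in that process. A sketch under that assumption (broker URL and names are placeholders):

```python
from celery import Celery, Task

app = Celery('demo', broker='amqp://localhost')  # placeholder broker URL

class StatefulTask(Task):
    """Instantiated once per worker process, so instance attributes
    persist across task invocations in that process."""
    def __init__(self):
        self.cache = {}  # lives as long as the worker process does

@app.task(base=StatefulTask, bind=True)
def crunch(self, key, value):
    # Reuse data left behind by a previous run of this task, if any.
    previous = self.cache.get(key)
    self.cache[key] = value
    return previous
```

Note that the cache is per worker process, not shared across the pool, so this only helps if the follow-up task lands on the same process.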

Django 1.6 + RabbitMQ 3.2.3 + Celery 3.1.9 - why does my celery worker die with: WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV)

核能气质少年 submitted on 2019-12-12 11:20:53
Question: This seems to address a very similar issue, but doesn't give me quite enough insight: https://github.com/celery/billiard/issues/101. It sounds like it might be a good idea to try a non-SQLite database. I have a straightforward Celery setup with my Django app. In my settings.py file I set a task to run as follows:

```python
CELERYBEAT_SCHEDULE = {
    'sync_database': {
        'task': 'apps.data.tasks.celery_sync_database',
        'schedule': timedelta(minutes=5)
    }
}
```

I have followed the instructions here: http://celery
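The excerpt ends mid-link. Given the hint about the linked billiard issue, one hedged thing to try is moving Celery's broker and result traffic off the SQLite Django database. The Celery 3.1-era setting names below are real; the URLs are placeholders:

```python
# settings.py (Celery 3.1 setting names)
BROKER_URL = 'amqp://guest:guest@localhost:5672//'   # RabbitMQ, per the title
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'   # anything but SQLite
```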

Stopping celery task gracefully

一曲冷凌霜 submitted on 2019-12-12 09:52:01
Question: I'd like to quit a Celery task gracefully (i.e., not by calling revoke(celery_task_id, terminate=True)). I thought I'd send a message to the task that sets a flag, so that the task function can return. What's the best way to communicate with a task?

Answer 1: Use signals for this. Celery's revoke is the right choice; it uses SIGTERM by default, but you can specify another using the signal argument if you prefer. Just set a signal handler for it in your task (using the signal module) that
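The answer is cut off; a sketch of what that handler might look like, assuming the task is revoked with revoke(task_id, terminate=True, signal='SIGUSR1') so the default SIGTERM is avoided (the broker URL and the work loop are placeholders):

```python
import signal
import time
from celery import Celery

app = Celery('demo', broker='amqp://localhost')  # placeholder broker URL

@app.task(bind=True)
def long_runner(self):
    stop = {'asked': False}

    def on_usr1(signum, frame):
        stop['asked'] = True  # only set a flag; let the loop exit cleanly

    # revoke(task_id, terminate=True, signal='SIGUSR1') delivers SIGUSR1
    # to the worker child running this task instead of the default SIGTERM.
    signal.signal(signal.SIGUSR1, on_usr1)

    while not stop['asked']:
        time.sleep(1)  # stand-in for one unit of real work
    return 'stopped cleanly'
```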

Kombu/Celery messaging

这一生的挚爱 submitted on 2019-12-12 03:37:14
Question: I have a simple application that sends and receives messages with Kombu and uses Celery to run the sending as a task. With Kombu alone, I receive the message properly: when I send "Hello", Kombu receives "Hello". But once I added the task, what Kombu receives is the Celery task ID. My purpose for this project is to be able to schedule when messages are sent and received, hence Celery. What I would like to know is: why is Kombu receiving the task ID instead of the sent message? I have searched and
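No answer survives in the excerpt. As a hedged illustration of the likely mix-up: calling a task returns an AsyncResult immediately, and publishing that object (or its string form) sends the task ID rather than the task's return value:

```python
from celery import Celery

app = Celery('demo', broker='amqp://localhost')  # placeholder broker URL

@app.task
def send_message(body):
    # ... publish `body` over Kombu here ...
    return body

# Calling the task returns an AsyncResult, not the message body:
result = send_message.delay('Hello')
print(result)        # the task id -- likely what the consumer was seeing
print(result.get())  # the task's return value ('Hello'), once it has run
```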

Question: Usage of django celery.backend_cleanup

跟風遠走 submitted on 2019-12-11 15:37:19
Question: There is not much documentation available on the actual usage of Django's celery.backend_cleanup. Let's assume I have the following 4 tasks scheduled with different intervals. Checking the DatabaseScheduler logs, I found that only Task1 executes on its interval.

```
[2018-12-28 11:21:08,241: INFO/MainProcess] Writing entries...
[2018-12-28 11:24:08,778: INFO/MainProcess] Writing entries...
[2018-12-28 11:27:09,315: INFO/MainProcess] Writing entries...
[2018-12-28 11:28:32,948: INFO/MainProcess] Scheduler:
```
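For reference on the task named in the title: celery.backend_cleanup is a built-in task that purges expired task results, and beat schedules it daily at 04:00 by default when a result backend with an expiry is configured. An explicit beat entry would be a sketch like this:

```python
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'backend_cleanup': {
        # built-in task; deletes results older than the configured expiry
        'task': 'celery.backend_cleanup',
        'schedule': crontab(hour=4, minute=0),
    },
}
```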

Efficient recurring tasks in celery?

时光毁灭记忆、已成空白 submitted on 2019-12-11 14:03:40
Question: I have ~250,000 recurring tasks each day; about a fifth of them might be updated with different scheduled datetimes each day. Can this be done efficiently in Celery? I am worried because of this code from Celery's beat.py:

```python
def tick(self):
    """Run a tick, that is one iteration of the scheduler.

    Executes all due tasks.
    """
    remaining_times = []
    try:
        for entry in values(self.schedule):
            next_time_to_run = self.maybe_due(entry, self.publisher)
            if next_time_to_run:
                remaining_times.append(next_time_to_run)
```
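The excerpt already shows the concern: tick() walks every schedule entry, so 250,000 beat entries would be scanned on each iteration. A common workaround (a sketch of a pattern, not something Celery ships) is to keep a single beat entry and fan out from a dispatcher task that queries only the rows that are actually due:

```python
from celery import Celery

app = Celery('demo', broker='amqp://localhost')  # placeholder broker URL

@app.task
def run_job(job_id):
    ...  # the actual per-item work

@app.task
def dispatch_due_jobs():
    # Run this one task from beat every minute; fetch_due_job_ids() is a
    # hypothetical query against an indexed "next_run_at" column, so beat
    # itself never sees more than a single schedule entry.
    for job_id in fetch_due_job_ids():
        run_job.delay(job_id)

def fetch_due_job_ids():
    return []  # stand-in for a real database query
```

Rescheduling a fifth of the jobs then becomes a plain row update rather than a rewrite of the beat schedule.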

Celery Python revoke

落爺英雄遲暮 submitted on 2019-12-11 10:54:10
Question:

```
software -> celery:3.1.20 (Cipater) kombu:3.0.35 py:2.7.6
            billiard:3.3.0.22 py-amqp:1.4.9
platform -> system:Linux arch:64bit, ELF imp:CPython
loader   -> celery.loaders.default.Loader
settings -> transport:amqp results:amqp
```

Currently I have the following function:

```python
@task(bind=True, default_retry_delay=300, max_retries=3)
def A(self, a, b, c, **kwargs):
    B()
    .
    .
    .
    code
```

This is the function I call to cancel A:

```python
@task(bind=True, default_retry_delay=300, max_retries=3)
def cancelA(self, a, b, c, **kwargs)
```
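The excerpt ends before the body of cancelA. A hedged sketch of what it presumably does, using the control API that exists in the Celery 3.1 shown in the report above (how the target task ID reaches cancelA is assumed):

```python
from celery import Celery

app = Celery('demo', broker='amqp://localhost')  # placeholder broker URL

@app.task(bind=True, default_retry_delay=300, max_retries=3)
def cancelA(self, task_id, **kwargs):
    # Plain revoke only prevents a task that has not started yet from
    # running; terminate=True additionally signals (SIGTERM by default)
    # the worker child that is already executing it.
    app.control.revoke(task_id, terminate=True)
```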

ImportError: cannot import custom module to celery tasks, how to improve?

为君一笑 submitted on 2019-12-11 09:14:22
Question: I need to import a model from my application, then make a request and send an SMS, but I cannot import my model, although the name is specified correctly. Who can help? I will wait, thank you all! Full traceback:

```
Traceback (most recent call last):
  File "c:\users\p.a.n.d.e.m.i.c\appdata\local\programs\python\python36-32\Lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\p.a.n.d.e.m.i.c\appdata\local\programs\python\python36-32\Lib\runpy.py", line
```
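The traceback is cut off before the ImportError itself. A common cause of this symptom is starting the worker from inside the package directory, so absolute imports of sibling modules are not on sys.path. A hedged sketch of the usual arrangement (all names invented for illustration):

```python
# myapp/tasks.py -- start the worker from the project root, e.g.
#   cd C:\path\to\project        (hypothetical project root)
#   celery -A myapp worker -l info
# so that `myapp` is importable as a package.
from myapp.celery import app       # hypothetical app module
from myapp.models import Contact   # hypothetical model that failed to import

@app.task
def send_sms(contact_id):
    contact = Contact.objects.get(pk=contact_id)
    ...  # make the HTTP request and send the SMS here
```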