celery

Celery Segmentation Fault

牧云@^-^@ submitted on 2019-12-10 17:39:25

Question: What can I do when a process (a celery worker) causes a segmentation fault? In my case the problem arises in celery, but I don't know how to find which module (used in the tasks) has the corrupted code. A link to some additional info about the problem: https://github.com/ask/celery/issues/690. In other words, which gdb command could give useful info, or what other recipes could resolve this problem? Thanks for your answers. Answer 1: This might help: install the python-amqp package and remove python-librabbitmq. On…
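A technique that often narrows down where the crash happens (separate from the answer above, and assuming Python 3): enable the standard-library faulthandler module in the worker's setup code so a Python-level traceback is printed when the process receives a fatal signal. A minimal sketch:

    import faulthandler

    # Print the Python traceback of every thread if the interpreter receives
    # SIGSEGV/SIGFPE/SIGABRT/SIGBUS, pointing at the task (and the C-extension
    # call) that was running when the worker crashed.
    faulthandler.enable()

The same effect is available without code changes by starting the worker with the PYTHONFAULTHANDLER=1 environment variable set.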

Performing a blocking request in django view

混江龙づ霸主 submitted on 2019-12-10 17:35:50

Question: In one of the views in my Django application, I need to perform a relatively lengthy network I/O operation. The problem is that other requests must wait for this request to complete even though they have nothing to do with it. I did some research and stumbled upon Celery, but as I understand it, it is used to perform background tasks independent of the request (so I cannot use the result of the task in the response to that request). Is there a way to process views asynchronously in Django so that while…
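The usual pattern here is to split the work into two views: one that enqueues the Celery task and returns its id, and one the client polls for the result, so the task result can still be used for a later response. A minimal sketch with hypothetical names (myapp.tasks, slow_network_call):

    from celery.result import AsyncResult
    from django.http import JsonResponse

    from myapp.tasks import slow_network_call  # hypothetical task

    def start(request):
        # Returns immediately; a worker performs the lengthy network I/O.
        result = slow_network_call.delay(request.GET.get("url"))
        return JsonResponse({"task_id": result.id})

    def poll(request, task_id):
        # The client calls this until the task result is available.
        result = AsyncResult(task_id)
        if result.ready():
            return JsonResponse({"status": "done", "value": result.get()})
        return JsonResponse({"status": "pending"})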

ImportError: No module named dateutil

て烟熏妆下的殇ゞ submitted on 2019-12-10 17:13:05

Question: I am trying to follow the example in the "First Steps with Celery" document. I have installed Celery using pip. I created a file called tasks.py in ~/python/celery, and it contains the following:

    from celery import Celery

    celery = Celery('tasks', broker='amqp://guest@localhost//')

    @celery.task
    def add(x, y):
        return x + y

I started a worker using celery -A tasks worker --loglevel=info while in the ~/python/celery directory, and it seems to be running. In a separate Terminal window, I launched…
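The traceback itself is not shown in the excerpt, but a quick way to confirm whether the python-dateutil dependency is missing (an assumption about the cause, since celery/kombu versions of that era pulled it in) is to import it with the same interpreter the worker uses:

    # Run with the same Python interpreter the worker uses; if this raises
    # ImportError, installing the python-dateutil package (for example with
    # "pip install python-dateutil") should resolve it.
    import dateutil
    print(dateutil.__version__)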

Celery time statistics per-task-name

做~自己de王妃 submitted on 2019-12-10 16:49:08

Question: I have some fairly busy celery queues, but I'm not sure which tasks are the problematic ones. Is there a way to aggregate results to figure out which tasks are taking a long time? I have 10-20 workers on 2-4 servers, using Redis as both the broker and the result backend. I noticed the busy queues in Flower, but can't figure out how to get timing statistics aggregated per task name. Answer 1: Method 1: If you have enabled logging when the celery workers are started, they log the time taken for each task. $…
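A different approach from the (truncated) log-based answer: hook Celery's task_prerun/task_postrun signals and aggregate runtimes by task name inside the workers. A minimal sketch; a real deployment would ship these numbers to a stats backend rather than keep them in process memory:

    import time
    from collections import defaultdict

    from celery.signals import task_prerun, task_postrun

    _starts = {}
    runtimes_by_name = defaultdict(list)

    @task_prerun.connect
    def _mark_start(task_id=None, task=None, **kwargs):
        _starts[task_id] = time.monotonic()

    @task_postrun.connect
    def _record_runtime(task_id=None, task=None, **kwargs):
        started = _starts.pop(task_id, None)
        if started is not None:
            # Aggregate per task name, e.g. for mean/percentiles later.
            runtimes_by_name[task.name].append(time.monotonic() - started)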

RabbitMQ Queued messages keep increasing

穿精又带淫゛_ submitted on 2019-12-10 16:16:19

Question: We have a Windows-based Celery/RabbitMQ server that executes long-running Python tasks out-of-process for our web application. What this does, for example, is take a CSV file and process each line; for every line it books one or more records in our database. This seems to work fine: I can see the records being booked by the worker processes. However, when I check the RabbitMQ server with the management plugin (the web-based management tool), I see the queued messages increasing and not coming…
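One common cause worth checking (an assumption, since the excerpt cuts off before any answer): with the amqp result backend, every finished task publishes a result message that sits in a queue until something consumes it, so queued messages grow even though the work completes. Two Celery 3.x-style settings that address this, sketched:

    # If nothing ever reads the task results, don't store them at all:
    CELERY_IGNORE_RESULT = True

    # Or keep results but let unconsumed result messages expire (seconds):
    CELERY_TASK_RESULT_EXPIRES = 3600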

Diagnosing Memory leak in boto3

倖福魔咒の submitted on 2019-12-10 15:07:20

Question: I have a celery worker running on Elastic Beanstalk that polls an SQS queue, gets messages (containing S3 file names), downloads those files from S3, and processes them. My worker is scheduled to run every 15 seconds, but for some reason the memory usage keeps increasing over time. This is the code I'm using to access SQS:

    def get_messages_from_sqs(queue_url, queue_region="us-west-2", number_of_messages=1):
        client = boto3.client('sqs', region_name=queue_region)
        sqs_response = client…
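One frequent culprit in this pattern is constructing a new boto3 client on every poll inside a long-running worker; clients are designed to be created once and reused. A sketch of the restructured polling code (hedged, since the real cause isn't shown in the excerpt):

    import boto3

    # Create the client once per process instead of once per call.
    _sqs = boto3.client("sqs", region_name="us-west-2")

    def get_messages_from_sqs(queue_url, number_of_messages=1):
        response = _sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=number_of_messages,
        )
        return response.get("Messages", [])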

'connection refused' with Celery

99封情书 submitted on 2019-12-10 14:06:00

Question: I have a Django project on an Ubuntu EC2 node, which I have been using to set up asynchronous task processing with Celery. I am following "How to list the queued items in celery?" along with the docs to experiment with celery at the command line. I've been able to get a basic task working at the command line, using:

    (env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO

However, if I run other celery commands like below, I'm getting the following:

    (env1)ubuntu…
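A detail that commonly explains this (an assumption, as the failing commands are cut off): inspection commands talk to the workers over the broker, so they need the same --app (and therefore the same broker URL) as the worker; run bare, celery falls back to amqp://guest@localhost and reports "connection refused" if RabbitMQ isn't listening there. The same inspection is also available from Python, using the broker configured on the app:

    from myproject.celery import app  # the same app the worker was started with

    insp = app.control.inspect()
    print(insp.active())    # tasks currently executing on each worker
    print(insp.reserved())  # tasks prefetched by workers but not yet running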

Can celery beat's tasks execute at staggered intervals?

让人想犯罪 __ submitted on 2019-12-10 12:16:31

Question: This is the beat tasks setting:

    celery_app.conf.update(
        CELERYBEAT_SCHEDULE={
            'taskA': {
                'task': 'crawlerapp.tasks.manual_crawler_update',
                'schedule': timedelta(seconds=3600),
            },
            'taskB': {
                'task': 'crawlerapp.tasks.auto_crawler_update_day',
                'schedule': timedelta(seconds=3600),
            },
            'taskC': {
                'task': 'crawlerapp.tasks.auto_crawler_update_hour',
                'schedule': timedelta(seconds=3600),
            },
        })

Normally taskA, taskB, and taskC execute at the same time after my command celery -A myproj beat as the beat…
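If the goal is to keep the hourly cadence but stop the three tasks from firing at the same instant, one option is to replace the identical timedelta schedules with staggered crontab entries. A sketch of that variation (the offsets are arbitrary):

    from celery.schedules import crontab

    celery_app.conf.update(
        CELERYBEAT_SCHEDULE={
            'taskA': {
                'task': 'crawlerapp.tasks.manual_crawler_update',
                'schedule': crontab(minute=0),   # on the hour
            },
            'taskB': {
                'task': 'crawlerapp.tasks.auto_crawler_update_day',
                'schedule': crontab(minute=20),  # 20 minutes past
            },
            'taskC': {
                'task': 'crawlerapp.tasks.auto_crawler_update_hour',
                'schedule': crontab(minute=40),  # 40 minutes past
            },
        })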

Celery beat not starting: EOFError('Ran out of input')

孤者浪人 submitted on 2019-12-10 11:02:03

Question: Everything worked perfectly fine until:

    celery beat v3.1.18 (Cipater) is starting.
    Configuration ->
        . broker -> amqp://user:**@staging-api.user-app.com:5672//
        . loader -> celery.loaders.app.AppLoader
        . scheduler -> celery.beat.PersistentScheduler
        . db -> /tmp/beat.db
        . logfile -> [stderr]@%INFO
        . maxinterval -> now (0s)
    [2015-09-25 17:29:24,453: INFO/MainProcess] beat: Starting...
    [2015-09-25 17:29:24,457: CRITICAL/MainProcess] beat raised exception <class 'EOFError'>:…
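An EOFError from PersistentScheduler at startup usually means the schedule file (here /tmp/beat.db, per the "db ->" line in the banner) was left corrupted, for example by an unclean shutdown; deleting it lets beat rebuild the schedule on the next start. A minimal sketch, assuming that is the cause here:

    import os

    # Path taken from the startup banner above; beat recreates the file
    # from CELERYBEAT_SCHEDULE the next time it starts.
    schedule_file = "/tmp/beat.db"
    if os.path.exists(schedule_file):
        os.remove(schedule_file)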

Celery Production Graceful Restart

纵饮孤独 submitted on 2019-12-10 10:59:21

Question: I need to restart the celery daemon, but I need it to tell the current workers to shut down as their tasks complete, and then spin up a new set of workers while the old ones are still shutting down. The current graceful option on the daemon waits for all tasks to complete before restarting, which is not useful when you have long-running jobs. Please do not suggest autoreload, as it is currently undocumented in 4.0.2. Answer 1: Alright, well, what I ended up doing was using supervisord and ansible to…
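The answer is cut off, but the overlap idea itself can be expressed with Celery's remote control API: start the replacement workers first, then send the old ones (addressed by node name) a warm shutdown so they exit once their in-flight tasks finish. A sketch with hypothetical names:

    from myproject.celery import app  # hypothetical app module

    OLD_NODES = ["worker1@oldhost", "worker2@oldhost"]  # hypothetical node names

    # Warm shutdown: the addressed workers stop consuming new tasks and exit
    # when their currently executing tasks complete; the freshly started
    # workers keep draining the queues in the meantime.
    app.control.shutdown(destination=OLD_NODES)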