celery

Starting Celery with supervisord: AttributeError: 'module' object has no attribute 'celery'

放肆的年华 submitted on 2019-12-11 18:00:06
Question: I used to have all my Flask app code and celery code in one file and it worked fine with supervisor. However, it got very hairy, so I split my tasks into celery_tasks.py and this problem occurs. In my project directory, I can start celery manually with the following command: celery -A celery_tasks worker --loglevel=INFO However, because this is a server, I need celery to run as a daemon in the background. But it shows the following error when I called sudo supervisorctl restart celeryd celeryd: ERROR
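
This "AttributeError: 'module' object has no attribute 'celery'" usually means the module named after -A does not expose a Celery app attribute, or that supervisord starts the worker from a different working directory so the module cannot be imported as expected. A minimal sketch of what celery_tasks.py could look like; the broker URL and the add task are illustrative:

```python
# celery_tasks.py -- minimal sketch; broker URL and task are illustrative
from celery import Celery

# `celery -A celery_tasks` looks for an app object inside this module;
# recent Celery versions accept any Celery instance attribute, older ones
# specifically expect one named "celery".
celery = Celery('celery_tasks', broker='redis://localhost:6379/0')

@celery.task
def add(x, y):
    return x + y
```

Under supervisord it also helps to point the program's directory option (or an absolute --workdir) at the project folder, so celery_tasks is importable when the daemon starts.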

Update tasks in Celery with RabbitMQ

梦想的初衷 submitted on 2019-12-11 17:12:59
Question: I'm using Celery in my Django project to create tasks that send email at a specific time in the future. A user can create a Notification instance with a notify_on datetime field. Then I pass the value of notify_on as the eta . class Notification(models.Model): ... notify_on = models.DateTimeField() def notification_post_save(instance, *args, **kwargs): send_notification.apply_async((instance,), eta=instance.notify_on) signals.post_save.connect(notification_post_save, sender=Notification) The problem with
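
The excerpt passes the model instance itself into apply_async. A minimal sketch of the same idea that passes the primary key instead (an assumption, not the poster's code), since ORM instances generally do not serialize cleanly as task arguments; the mail contents and the user foreign key on Notification are also assumptions:

```python
# tasks.py -- sketch; pk-based lookup and mail details are illustrative
from celery import shared_task
from django.core.mail import send_mail
from django.db.models import signals

from .models import Notification

@shared_task
def send_notification(notification_pk):
    notification = Notification.objects.get(pk=notification_pk)
    send_mail(
        subject='Reminder',
        message='Your notification is due.',
        from_email='noreply@example.com',
        recipient_list=[notification.user.email],  # assumes a user FK
    )

def notification_post_save(sender, instance, **kwargs):
    # eta tells Celery not to run the task before notify_on
    send_notification.apply_async((instance.pk,), eta=instance.notify_on)

signals.post_save.connect(notification_post_save, sender=Notification)
```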

raise ConnectionError(self._error_message(e)) kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused

若如初见. submitted on 2019-12-11 16:59:12
Question: A minimal django/celery/redis setup runs locally, but when deployed to Heroku it gives me the following error when I run it in Python: raise ConnectionError(self._error_message(e)) kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused. This is my tasks.py file in my application directory: from celery import Celery import os app = Celery('tasks', broker='redis://localhost:6379/0') app.conf.update(BROKER_URL=os.environ['REDIS_URL'], CELERY_RESULT_BACKEND=os
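
The error shows the worker still connecting to localhost:6379, which suggests the hardcoded broker passed to the Celery() constructor is winning over the later conf.update. A minimal sketch that reads the Heroku config var up front, with a localhost fallback only for local development (the fallback is an assumption):

```python
# tasks.py -- sketch; take broker and backend from REDIS_URL
import os

from celery import Celery

redis_url = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')

app = Celery('tasks', broker=redis_url, backend=redis_url)
```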

celery .env variables not used in settings.py

我怕爱的太早我们不能终老 submitted on 2019-12-11 15:52:14
Question: I am stuck on using config vars from my .env file inside settings.py for my celery.py. When I hardcode CELERY_BROKER_URL = 'redis://localhost' , everything works; however, when I use CELERY_BROKER_URL = os.environ.get('REDIS_URL') , the REDIS_URL is not picked up and I get an error. celery.py: from __future__ import absolute_import, unicode_literals import os from celery import Celery # set the default Django settings module for the 'celery' program. os.environ['DJANGO_SETTINGS_MODULE'] =
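
os.environ.get('REDIS_URL') returns None unless the variables from .env have actually been loaded into the process environment before settings.py is evaluated. A minimal sketch assuming python-dotenv is used for that step (any .env loader would do; the fallback URL is also an assumption):

```python
# settings.py -- sketch; python-dotenv usage and fallback URL are assumptions
import os

from dotenv import load_dotenv

load_dotenv()  # copies the values from .env into os.environ

CELERY_BROKER_URL = os.environ.get('REDIS_URL', 'redis://localhost')
```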

Question: Usage of django celery.backend_cleanup

跟風遠走 submitted on 2019-12-11 15:37:19
Question: There is not much documentation available for the actual usage of django celery.backend_cleanup. Let's assume I have the following 4 tasks scheduled with different intervals. Checking the DatabaseScheduler logs, I found that only Task1 executes on its interval. [2018-12-28 11:21:08,241: INFO/MainProcess] Writing entries... [2018-12-28 11:24:08,778: INFO/MainProcess] Writing entries... [2018-12-28 11:27:09,315: INFO/MainProcess] Writing entries... [2018-12-28 11:28:32,948: INFO/MainProcess] Scheduler:
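
For context, celery.backend_cleanup is a built-in periodic task that deletes task results older than result_expires from the result backend. A minimal sketch of registering it explicitly next to an ordinary task in the beat schedule, assuming the Celery app reads its configuration from Django settings with the CELERY_ namespace (schedule times and the task1 name are illustrative):

```python
# settings.py -- sketch; times and task names are illustrative
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    # built-in task that purges results older than result_expires
    'backend-cleanup': {
        'task': 'celery.backend_cleanup',
        'schedule': crontab(hour=4, minute=0),
    },
    # an ordinary application task on its own interval
    'task1-every-3-minutes': {
        'task': 'myapp.tasks.task1',
        'schedule': 180.0,
    },
}
```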

Clarification of guide to Heroku Celery

家住魔仙堡 submitted on 2019-12-11 15:26:49
Question: I'm trying to figure out the woeful instructions here. Under the section "Configuring a Celery app" I'm not sure where I put the code: import os app.conf.update(BROKER_URL=os.environ['REDIS_URL'], CELERY_RESULT_BACKEND=os.environ['REDIS_URL']) Any clarification of these instructions is greatly appreciated. Answer 1: The instructions indicate you should put that code in your tasks.py module. However, that's not exactly extensible for multiple packages, each with their own tasks.py module. What
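
Following the answer fragment, a minimal sketch of a tasks.py module in which that snippet has a home, so conf.update is applied to an already constructed app object (the ping task is illustrative):

```python
# tasks.py -- sketch of where the conf.update call from the guide fits
import os

from celery import Celery

app = Celery('tasks')

# pull broker and result backend from the Heroku config vars
app.conf.update(
    BROKER_URL=os.environ['REDIS_URL'],
    CELERY_RESULT_BACKEND=os.environ['REDIS_URL'],
)

@app.task
def ping():
    return 'pong'
```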

How to setup celery worker to log all task function calls to one file

那年仲夏 submitted on 2019-12-11 15:20:56
Question: I have a Django application with the following logging configuration. LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'default': { 'format': '%(asctime)s [%(levelname)s] %(filename)s:%(lineno)s: %(message)s' }, }, 'handlers': { 'cron': { 'class': 'logging.FileHandler', 'filename': 'cron.log', 'formatter': 'default', }, 'admin': { 'class': 'logging.FileHandler', 'filename': 'admin.log', 'formatter': 'default', }, 'app': { 'class': 'logging.FileHandler', 'filename': 'app.log',
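
One way to collect every task's log records in a single file is to add a dedicated handler and attach it to the celery.task logger, which is the parent of loggers returned by celery.utils.log.get_task_logger(). A minimal sketch extending the LOGGING dict from the excerpt (the celery.log filename is an illustrative choice):

```python
# settings.py -- sketch; extends the question's LOGGING dict
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(asctime)s [%(levelname)s] %(filename)s:%(lineno)s: %(message)s'
        },
    },
    'handlers': {
        'celery': {
            'class': 'logging.FileHandler',
            'filename': 'celery.log',
            'formatter': 'default',
        },
    },
    'loggers': {
        # parent of loggers created with get_task_logger(), so every
        # task's records end up in celery.log
        'celery.task': {
            'handlers': ['celery'],
            'level': 'INFO',
            'propagate': False,
        },
    },
}
```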

Celery tasks functions - web server vs remote server

╄→尐↘猪︶ㄣ submitted on 2019-12-11 14:48:53
Question: I want to send tasks from a web server (running Django) to a remote machine that hosts a RabbitMQ server and some workers that I implemented with Celery. If I follow the standard Celery approach, it seems I have to share the code between both machines, which means replicating the workers' logic in the web app code. So: is there a best practice for doing that? Since the code is redundant, I am thinking about using a git submodule (=> replicated in the web app code repo, and in the workers code
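
An alternative to sharing the task code with the web app is to enqueue tasks by name with send_task, so the producer only needs the broker URL and the registered task name while the implementation stays on the workers. A minimal sketch (broker URL and task name are illustrative):

```python
# producer side (Django web server) -- sketch; the task body lives only
# on the remote workers and is referenced here by its registered name
from celery import Celery

app = Celery(broker='amqp://guest@rabbit-host//')

result = app.send_task('workers.tasks.process_order', args=[42])
```

The trade-off is that the producer loses the imported task's signature, so a shared package or git submodule, as the question suggests, remains the usual way to keep both sides in sync.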

Efficient recurring tasks in celery?

时光毁灭记忆、已成空白 submitted on 2019-12-11 14:03:40
Question: I have ~250,000 recurring tasks each day; about a fifth of them might be updated with different scheduled datetimes each day. Can this be done efficiently in Celery? I am worried about this, from celery's beat.py: def tick(self): """Run a tick, that is one iteration of the scheduler. Executes all due tasks. """ remaining_times = [] try: for entry in values(self.schedule): next_time_to_run = self.maybe_due(entry, self.publisher) if next_time_to_run: remaining_times.append(next_time_to_run)
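
Since tick() iterates every schedule entry, one common workaround (an assumption here, not something the excerpt prescribes) is to register a single coarse beat entry that queries the database for items due soon and enqueues them with eta, instead of keeping 250,000 individual entries in the scheduler:

```python
# tasks.py -- sketch of a single dispatcher task replacing per-item beat
# entries; the Reminder model and the 60-second window are hypothetical
from datetime import timedelta

from celery import shared_task
from django.utils import timezone

from .models import Reminder

@shared_task
def dispatch_due_reminders():
    window_end = timezone.now() + timedelta(seconds=60)
    for reminder in Reminder.objects.filter(send_at__lte=window_end, sent=False):
        # hand the actual work to a normal task, delayed until its exact time
        send_reminder.apply_async((reminder.pk,), eta=reminder.send_at)
        reminder.sent = True
        reminder.save(update_fields=['sent'])

@shared_task
def send_reminder(reminder_pk):
    ...  # delivery logic goes here
```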

SQLAlchemy session handling in delayed Celery tasks

故事扮演 submitted on 2019-12-11 13:58:08
Question: I am using a relational database via SQLAlchemy. I want to spawn a job that deals with the database using Celery. Here is the code: from sqlalchemy.orm.session import Session from celery.task import task from myapp.user import User @task def job(user): # job... session = Session.object_session(user) with user.begin(): user.value = result_value def ordinary_web_request_handler(uid): assert isinstance(session, Session) user = session.query(User).get(int(uid)) # deals with user... job.delay(user)
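
Passing the live User object into job.delay() ties the task to the request's session, which no longer exists when the worker picks the job up. A common pattern (an assumption, not the poster's answer) is to pass the primary key and open a fresh session inside the task; session_factory below stands for a configured sessionmaker and is hypothetical:

```python
# sketch -- pass the user id instead of the instance
from celery.task import task

from myapp.database import session_factory  # hypothetical sessionmaker module
from myapp.user import User

@task
def job(uid):
    session = session_factory()
    try:
        user = session.query(User).get(int(uid))
        user.value = 42  # placeholder for the real computation
        session.commit()
    finally:
        session.close()

def ordinary_web_request_handler(uid):
    job.delay(uid)
```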