celery

Celery - Running as a system daemon

北战南征 submitted on 2019-12-02 05:47:42
1. Controlling Celery with systemd

Usage: systemctl {start|stop|restart|status} celery.service
Configuration file: /etc/celery/celery.conf
celery service file: /etc/systemd/system/celery.service
celery beat service file: /etc/systemd/system/celerybeat.service

Service file: /etc/systemd/system/celery.service

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/celery/celery.conf
WorkingDirectory=/app/celery
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL
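For reference, here is a minimal sketch of what the EnvironmentFile referenced above (/etc/celery/celery.conf) might contain. Only the variable names come from the ExecStart line; the values are illustrative assumptions.

# /etc/celery/celery.conf -- illustrative values only
CELERYD_NODES="worker1"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="proj"                          # placeholder project/app module
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"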

Async task error - Celery: WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL)

本秂侑毒 submitted on 2019-12-02 04:49:30
Symptom:
  An asynchronous task works fine in the test environment but fails in production.
When using celery for backend asynchronous tasks, it fails with: Celery: WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL)
An online search turned up: https://intellipaat.com/community/6094/celery-workerlosterror-worker-exited-prematurely-signal-9-sigkill
The main cause is that the worker process is killed prematurely, so the asynchronous task exits before it has finished. Changing how celery is started under supervisor had no effect.
Final solution:
  Upgrade the dependency version:
  pip uninstall celery
  pip install celery==x.x.x
  Also keep an eye on the redis package version; you can upgrade all dependencies at once with pip install -r requirements.txt
Source: https://www.cnblogs.com/zj-Rules/p/11730479.html
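A minimal sketch of the upgrade step described above; the version pins are placeholders, since the original post elides the exact versions (celery==x.x.x).

# placeholder versions -- substitute the ones that fit your environment
pip uninstall -y celery
pip install "celery==4.3.0"
pip install "redis==3.3.11"
# or reinstall/upgrade everything listed in requirements.txt in one go
pip install -U -r requirements.txt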

How can I log from my python application to splunk, if I use celery as my task scheduler?

杀马特。学长 韩版系。学妹 submitted on 2019-12-02 04:25:25
Question: I have a python script running on a server that should be executed once a day by the celery scheduler. I want to send my logs directly from the script to splunk. I am trying to use this splunk_handler library. If I run the splunk_handler without celery locally, it seems to work. But if I run it together with celery, there seem to be no logs that reach the splunk_handler. Console log:
[SplunkHandler DEBUG] Timer thread executed but no payload was available to send
How do I set up the loggers

How to run celery workers as superuser?

淺唱寂寞╮ submitted on 2019-12-02 03:19:11
When I run celery workers with sudo, I get the following error:
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea! If you really want to continue then you have to set the C_FORCE_ROOT environment variable (but please think about this before you do). User information: uid=0 euid=0 gid=0 egid=0
Also, my C_FORCE_ROOT environment variable is true:
echo $C_FORCE_ROOT
true
More information: Python 2.7.6, celery 3.1.23 (Cipater), OS: Linux (Ubuntu 14.04)
How should I run celery with sudo? I know it's not secure but for some
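One plausible cause (an assumption, not stated in the excerpt) is that sudo starts the worker with a fresh environment, so C_FORCE_ROOT is set in the calling shell but never reaches the root process. A minimal sketch of passing it through explicitly ("proj" is a placeholder application name):

# pass the variable on the sudo command line so the worker's environment sees it
sudo C_FORCE_ROOT="true" celery -A proj worker --loglevel=info

# or export it and ask sudo to preserve the caller's environment
export C_FORCE_ROOT="true"
sudo -E celery -A proj worker --loglevel=info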

How can I log from my python application to splunk, if I use celery as my task scheduler?

谁说我不能喝 submitted on 2019-12-02 02:05:36
I have a python script running on a server that should be executed once a day by the celery scheduler. I want to send my logs directly from the script to splunk. I am trying to use this splunk_handler library. If I run the splunk_handler without celery locally, it seems to work. But if I run it together with celery, there seem to be no logs that reach the splunk_handler. Console log:
[SplunkHandler DEBUG] Timer thread executed but no payload was available to send
How do I set up the loggers correctly, so that all the logs go to the splunk_handler? Apparently, celery sets up its own loggers
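One common approach (a sketch under assumptions, not the answer given in the thread) is to attach the handler through Celery's after_setup_logger / after_setup_task_logger signals, since Celery configures its own loggers at startup. The SplunkHandler arguments below follow the splunk_handler README, and the connection values are placeholders.

import logging

from celery.signals import after_setup_logger, after_setup_task_logger
from splunk_handler import SplunkHandler

def add_splunk_handler(logger, *args, **kwargs):
    # Placeholder Splunk HEC settings -- replace with your own host/token/index.
    handler = SplunkHandler(
        host='splunk.example.com',
        port='8088',
        token='YOUR_HEC_TOKEN',
        index='main',
    )
    handler.setLevel(logging.INFO)
    logger.addHandler(handler)

# Hook in after Celery has set up its root and task loggers.
after_setup_logger.connect(add_splunk_handler)
after_setup_task_logger.connect(add_splunk_handler)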

Django - Should external API requests always be made through a task handler (e.g. Celery)?

别说谁变了你拦得住时间么 submitted on 2019-12-01 23:35:16
I have a Django app where I have created a custom middleware. It works as follows:
1. The middleware intercepts a token (which identifies the user) within each request, and makes a request to an external API with that token.
2. The external API returns what permissions the user making the original request has.
3. The middleware completes, and the user gets data returned based on their permissions.
This is my question: because my app has to wait for the API request to return before it can process the request, does it still make sense to use a task queue such as celery? Wouldn't it still have to block the
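A small illustration of the trade-off being asked about (the endpoint and function names are hypothetical): if the middleware needs the API's answer before it can finish, dispatching the call to Celery and then waiting for the result blocks the request for at least as long as calling the API directly, plus broker overhead.

import requests
from celery import shared_task

PERMISSIONS_URL = "https://auth.example.com/permissions"  # hypothetical endpoint

@shared_task
def fetch_permissions(token):
    # The same HTTP call, just executed by a worker process.
    resp = requests.get(PERMISSIONS_URL, headers={"Authorization": token})
    resp.raise_for_status()
    return resp.json()

def get_permissions_directly(token):
    # Option 1: call the API in the middleware itself -- blocks until it returns.
    resp = requests.get(PERMISSIONS_URL, headers={"Authorization": token})
    resp.raise_for_status()
    return resp.json()

def get_permissions_via_celery(token):
    # Option 2: hand the call to Celery and wait for the result -- the web
    # request still blocks at least as long, so the queue buys nothing here.
    return fetch_permissions.delay(token).get()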

Django Celery cache lock did not work?

帅比萌擦擦* submitted on 2019-12-01 22:43:03
I am trying to use Django cache to implement a lock mechanism. On the Celery official site it is claimed that the Django cache works fine for this. However, in my experience it does not. My experience is that if multiple threads/processes acquire the lock at almost the same time (within about 0.003 seconds of each other), all of them acquire the lock successfully; threads that try to acquire the lock later than ~0.003 seconds fail. Am I the only person who has experienced this? Please correct me if possible.

def acquire(self, block = False, slp_int = 0.001):
    while True:
        added = cache.add(self.ln, 'true',
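For comparison, here is a minimal sketch of the cache-based lock pattern from the Celery documentation ("ensuring a task is only executed one at a time"); the lock id, owner id and timeout are placeholders. Note that cache.add() is only an atomic "set if absent" on shared backends such as memcached or Redis; with the per-process local-memory cache each process has its own store, which can produce exactly the multiple-acquire behaviour described above.

from contextlib import contextmanager
from django.core.cache import cache

LOCK_EXPIRE = 60 * 10  # placeholder: lock expires after 10 minutes

@contextmanager
def cache_lock(lock_id, owner_id):
    # add() returns False if the key already exists, i.e. someone else holds the lock.
    acquired = cache.add(lock_id, owner_id, LOCK_EXPIRE)
    try:
        yield acquired
    finally:
        if acquired:
            # only the owner releases the lock
            cache.delete(lock_id)

# usage inside a task:
# with cache_lock('my-task-lock', self.app.oid) as got_lock:
#     if got_lock:
#         ...  # do the work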

Using celery beat for scheduled tasks

风流意气都作罢 submitted on 2019-12-01 22:01:50
celery beat is used to start scheduled (periodic) task dispatching. Typical usage: start celery beat, then start a worker, and let beat invoke the tasks defined in the worker.
Normally we can create a periodic task directly in code by inserting rows through the model layer:

import json
import random
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, created = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS
)
PeriodicTask.objects.create(
    interval=schedule,
    name=str(random.random()),  # the task name must be unique
    task="adv_celery.tasks.task1.tasks.test",
    # task arguments
    args=json.dumps(["hello "]),
    kwargs=json.dumps({}),
    # expires=datetime.datetime.utcnow() + datetime.timedelta(seconds=30)
)

Since we are inserting schedules into the database, remember to configure:

CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'

Then, in adv_celery.tasks.task1
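The excerpt is cut off at this point. For completeness, a minimal sketch of what the referenced task module might look like; the module layout and task body are assumptions based only on the task path "adv_celery.tasks.task1.tasks.test" used above.

# adv_celery/tasks/task1/tasks.py (hypothetical layout)
from celery import shared_task

@shared_task
def test(msg):
    # placeholder body: the periodic task above passes args=["hello "]
    print(msg)
    return msg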

Airflow 1.10 - Scheduler Startup Fails

偶尔善良 submitted on 2019-12-01 20:18:59
I've just painfully installed Airflow 1.10 thanks to my previous post here. We have a single EC2 instance running, our queue is AWS ElastiCache Redis, and our meta database is AWS RDS for PostgreSQL. Airflow works with this setup just fine when we are on Airflow version 1.9, but we are encountering an issue on Airflow version 1.10 when we go to start up the scheduler.
[2018-08-15 16:29:14,015] {jobs.py:385} INFO - Started process (PID=15778) to work on /home/ec2-user/airflow/dags/myDag.py
[2018-08-15 16:29:14,055] {jobs.py:1782} INFO - Processing file /home/ec2-user/airflow/dags/myDag.py

A brief look at celery's pitfalls

 ̄綄美尐妖づ submitted on 2019-12-01 20:15:52
celery
How celery is used, and how it is configured in Django, is not covered in detail here; this post mainly records the pitfalls of using it in Django.

Pitfalls

Timezone issues
celery's default timezone is UTC, which is 8 hours behind China Standard Time (UTC+8). If you publish scheduled tasks, pay close attention to the scheduled time; otherwise you may be using the correct approach and yet the task is never actually invoked.
To set celery's timezone, add one setting to the Django project's settings.py:

CELERY_TIMEZONE = 'Asia/Shanghai'

django-celery can pick up the timezone configured in the settings.
You can also convert the time to the current timezone when publishing a scheduled task, using Django's built-in get_current_timezone():

# Convert the target time to the current timezone
from django.utils.timezone import get_current_timezone
import datetime

send_time = datetime.datetime.now() + datetime.timedelta(days=1)
tz = get_current_timezone()
send_time = tz.localize(send_time)

When calling the asynchronous task, pass the converted time via the eta parameter:

celery_task.apply_async(args=[], kwargs={}, eta=send_time)
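As a small variation (my own sketch, not from the original post), the same conversion can be done with Django's timezone helpers, which avoids calling pytz's localize by hand; celery accepts any timezone-aware datetime for eta.

from datetime import timedelta
from django.utils import timezone

# timezone.localtime() returns an aware datetime in Django's current timezone,
# so the result can be passed to eta directly.
send_time = timezone.localtime() + timedelta(days=1)
celery_task.apply_async(args=[], kwargs={}, eta=send_time)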