Celery systemd proper configuration for two applications to use the same daemon service

Posted by [亡魂溺海] on 2020-01-06 04:31:06

Question


With some insights from my previous question, I reconfigured Celery to run as a daemon under systemd, but I am still having trouble configuring it for multiple apps. The Celery documentation, which shows how to daemonize a single app, is not enough for me to work out a multi-app setup, and I have little experience daemonizing anything.

So far, this is my configuration for a single service shared by both applications.

/etc/conf.d/celery

CELERYD_NODES="w1 w2 w3"

# Absolute or relative path to the 'celery' command:
CELERY_BIN_appA="/var/www/appA/public_html/venv/bin/celery"
CELERY_BIN_appB="/var/www/appB/public_html/venv/bin/celery"

# App instances
CELERY_APP_appA="appA.celery"
CELERY_APP_appB="appB.celery"

# How to call 'celery multi'
CELERYD_MULTI="multi"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
#   and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"

# Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"

/etc/systemd/system/celery.service

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=myuser
Group=www-data
EnvironmentFile=/etc/conf.d/celery
ExecStart=/bin/bash -c '${CELERY_BIN_appA} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP_appA} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} --workdir=/var/www/appA/public_html/ ${CELERYD_OPTS} && ${CELERY_BIN_appB} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP_appB} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} --workdir=/var/www/appB/public_html/ ${CELERYD_OPTS}'
ExecStop=/bin/bash -c '${CELERY_BIN_appA} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE} && ${CELERY_BIN_appB} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/bash -c '${CELERY_BIN_appA} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP_appA} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS} && ${CELERY_BIN_appB} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP_appB} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target

When I try to start the service, it fails with an out-of-memory error (OSError: [Errno 12] Cannot allocate memory).

Output of systemctl status celery.service:

● celery.service - Celery Service
   Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2019-12-30 18:31:02 IST; 16s ago
  Process: 28806 ExecStart=/bin/bash -c ${CELERY_BIN_appA} multi start ${CELERYD_NODES}    -A ${CELERY_APP_appA} --pidfile=${CELERYD_PID_FILE}    --logfile=${CELERYD_LOG_FILE} --log

Dec 30 18:31:00 claudia bash[28806]:   File "/var/www/appB/public_html/venv/lib/python3.6/site-packages/celery/apps/multi.py", line 196, in _waitexec
Dec 30 18:31:00 claudia bash[28806]:     pipe = Popen(argstr, env=env)
Dec 30 18:31:00 claudia bash[28806]:   File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
Dec 30 18:31:00 claudia bash[28806]:     restore_signals, start_new_session)
Dec 30 18:31:00 claudia bash[28806]:   File "/usr/lib/python3.6/subprocess.py", line 1295, in _execute_child
Dec 30 18:31:00 claudia bash[28806]:     restore_signals, start_new_session, preexec_fn)
Dec 30 18:31:00 claudia bash[28806]: OSError: [Errno 12] Cannot allocate memory
Dec 30 18:31:00 claudia systemd[1]: celery.service: Control process exited, code=exited status=1
Dec 30 18:31:02 claudia systemd[1]: celery.service: Failed with result 'exit-code'.
Dec 30 18:31:02 claudia systemd[1]: Failed to start Celery Service.

Please break down the process for me and help me understand what is wrong here and how to configure this properly.


Answer 1:


You should have two separate systemd unit files for the two sets of Celery workers, something like celery-appA.service and celery-appB.service. Also, you do not need /bin/bash -c to run the worker. Instead, create a virtual environment and use the full path to the celery executable inside it. Say you have created a virtual environment in /opt/celery/venv and installed Celery there (with something like /opt/celery/venv/bin/pip3 install 'celery[redis,msgpack]'). Then, instead of /bin/bash -c ..., you can simply run /opt/celery/venv/bin/celery worker -A ....
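As a rough sketch of what the split might look like, here is one possible celery-appA.service; celery-appB.service would be identical with appB substituted everywhere. The paths, user, group, and worker options are assumptions carried over from the question, and Type=simple with a plain celery worker is used instead of multi so that systemd supervises the process directly:

/etc/systemd/system/celery-appA.service

[Unit]
Description=Celery worker for appA
After=network.target

[Service]
Type=simple
User=myuser
Group=www-data
WorkingDirectory=/var/www/appA/public_html/
# The worker runs in the foreground, so no pidfile, 'multi', or
# 'bash -c' indirection is needed; systemd tracks the main process.
ExecStart=/var/www/appA/public_html/venv/bin/celery worker \
  -A appA.celery --loglevel=INFO \
  --logfile=/var/log/celery/appA.log \
  --time-limit=300 --concurrency=8

[Install]
WantedBy=multi-user.target

After creating both files, reload systemd and start the two services independently: systemctl daemon-reload, then systemctl enable --now celery-appA celery-appB. Each app can then be restarted or inspected without touching the other.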

Before you start the Celery workers, check what is using the memory. It could be that some old Celery workers are still running and consuming your system resources.
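For example, before starting the new services you could check for leftover workers and free memory (standard commands; output will vary by system):

# list any Celery processes still alive
ps aux | grep '[c]elery'

# show free RAM and swap
free -h

If stale workers show up, stop them (e.g. with pkill -f 'celery worker') before starting the systemd services.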



Source: https://stackoverflow.com/questions/59531542/celery-systemd-proper-configuration-for-two-applications-to-use-the-same-daemon
