Issues with Celery configuration on AWS Elastic Beanstalk - “No config updates to processes”

Submitted by Anonymous (unverified) on 2019-12-03 02:38:01

Question:

I have a Django 2 application deployed on AWS Elastic Beanstalk, and I'm trying to configure Celery in order to execute async tasks on the same machine.

My files:

02_packages.config

files:   "/usr/local/share/pycurl-7.43.0.tar.gz" :     mode: "000644"     owner: root     group: root     source: https://pypi.python.org/packages/source/p/pycurl/pycurl-7.43.0.tar.gz  packages:   yum:     python34-devel: []     libcurl-devel: []  commands:   01_download_pip3:     # run this before PIP installs requirements as it needs to be compiled with OpenSSL     command: 'curl -O https://bootstrap.pypa.io/get-pip.py'   02_install_pip3:     # run this before PIP installs requirements as it needs to be compiled with OpenSSL     command: 'python3 get-pip.py'  container_commands:   03_pycurl_reinstall:     # run this before PIP installs requirements as it needs to be compiled with OpenSSL     # the upgrade option is because it will run after PIP installs the requirements.txt file.     # and it needs to be done with the virtual-env activated     command: 'source /opt/python/run/venv/bin/activate && pip3 install /usr/local/share/pycurl-7.43.0.tar.gz --global-option="--with-nss" --upgrade' 

03_django.config

container_commands:
  01_migrate_db:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_createsu: # custom django-admin command to create the "admin" superuser
    command: "source /opt/python/run/venv/bin/activate && python manage.py createsu"
    leader_only: true
  03_update_permissions: # custom django-admin command to update user perms
    command: "source /opt/python/run/venv/bin/activate && python manage.py update_permissions"
    leader_only: true
  04_collectstatic:
    command: "django-admin.py collectstatic --noinput"
  05_pip_upgrade:
    command: /opt/python/run/venv/bin/pip install --upgrade pip
    ignoreErrors: false

option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: "my_proj.settings_prod"
    APP_ENV: "test"
    PYCURL_SSL_LIBRARY: "nss"
  aws:elasticbeanstalk:container:python:
    WSGIPath: myproj/wsgi.py
    NumProcesses: 3
    NumThreads: 20
  aws:elasticbeanstalk:container:python:staticfiles:
    "/static/": "static/"

requirements.txt

boto3==1.6.3
botocore==1.9.3
Django==2.0.3
django-cors-headers==2.2.0
django-filter==1.1.0
django-storages==1.6.5
djangorestframework==3.7.7
djangorestframework-jwt==1.11.0
docutils==0.14
jmespath==0.9.3
Markdown==2.6.11
olefile==0.44
Pillow==5.0.0
psycopg2==2.7.3.2
PyJWT==1.5.3
python-dateutil==2.6.1
pytz==2018.3
reportlab==3.4.0
s3transfer==0.1.13
six==1.11.0
Wand==0.4.4
uwsgi==2.0.17  # WSGI for production deployment
gevent==1.2.2  # Non-blocking Python network library, required by uWSGI
celery==4.1.0
django_celery_beat==1.1.1
django_celery_results==1.0.1

celery_conf/config.py

AWS_ACCESS_KEY_ID = ...
AWS_SECRET_ACCESS_KEY = ...

CELERY_BROKER_TRANSPORT = 'sqs'
CELERY_BROKER_URL = 'sqs://'  # 'sqs://%s:%s@' % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

CELERY_BROKER_USER = AWS_ACCESS_KEY_ID
CELERY_BROKER_PASSWORD = AWS_SECRET_ACCESS_KEY
CELERY_WORKER_STATE_DB = '/var/run/celery/worker.db'
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
CELERY_WORKER_PREFETCH_MULTIPLIER = 0  # See https://github.com/celery/celery/issues/3712

CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'

CELERY_DEFAULT_QUEUE = 'myproj-django'  # Queue name
CELERY_QUEUES = {
    CELERY_DEFAULT_QUEUE: {
        'exchange': CELERY_DEFAULT_QUEUE,
        'binding_key': CELERY_DEFAULT_QUEUE,
    }
}

CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "us-east-1",  # US East (N. Virginia)
    'visibility_timeout': 360,
    'polling_interval': 1
}

CELERY_RESULT_BACKEND = 'django-db'

myproj/celery.py

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery
from celery.schedules import crontab

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings_prod')

app = Celery('myproj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

if __name__ == '__main__':
    app.start()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
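One thing worth double-checking alongside this file: Celery's "first steps with Django" guide also has you import the app in the project package's __init__.py so it is loaded whenever Django starts. The question doesn't show that file, so the following is an assumed sketch of it:

# myproj/__init__.py -- assumed contents, per Celery's Django integration guide
from __future__ import absolute_import, unicode_literals

# Make sure the Celery app is always imported when Django starts,
# so that tasks registered with @shared_task bind to this app.
from .celery import app as celery_app

__all__ = ('celery_app',)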

myproj/myapp/tasks.py

from __future__ import absolute_import, unicode_literals
from celery.decorators import task

from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)


@task()
def do_something():
    logger.info('******** CALLING ASYNC TASK WITH CELERY **********')
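For context, a task declared like this is enqueued (rather than called inline) with .delay() or .apply_async(); this is a minimal sketch of a call site, with the import path guessed from the file layout above:

# hypothetical call site, e.g. inside a view or signal handler
from myproj.myapp.tasks import do_something  # import path is a guess based on the layout above

# .delay() serializes the call as JSON and pushes it onto the SQS queue;
# the supervised Celery worker picks it up and runs it asynchronously
do_something.delay()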

settings_prod.py

# Importing base settings
from .settings import *

DEBUG = False

# Importing Celery configurations
from celery_conf.config import *

INSTALLED_APPS += ('django_celery_beat',)
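One addition that appears to be needed here (suggested by the /var/log/celery/celery.log traceback further down, not by the accepted answer): with CELERY_RESULT_BACKEND = 'django-db', django_celery_results must also be registered, roughly:

# settings_prod.py -- likely required by the 'django-db' result backend; the
# "TaskResult ... isn't in an application in INSTALLED_APPS" error below points here
INSTALLED_APPS += ('django_celery_beat', 'django_celery_results',)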

UPDATE 1

According to /var/log/celery-beat.log, it seems that Celery is not able to find my project module, so I think my project structure is not the one Celery expects. How can I make it work without changing the whole project structure? (A quick import check is sketched after the tree below.)

My project structure is the following:

-- myprof-folder/
   -- requirements.txt
   -- .ebextensions/
   -- celery_conf/
      -- __init__.py
      -- config.py
   -- myproj/
      -- __init__.py
      -- settings.py       # base settings
      -- settings_prod.py  # production settings
      -- urls.py
      -- wsgi.py
      -- myapp1/
         -- models.py
         -- urls.py
         -- apps.py
         -- views.py
         -- tasks.py       # here my app's tasks
         -- ...
      -- myapp2/
      -- myapp3/
      -- ...
      -- myappN/
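A quick diagnostic for the ModuleNotFoundError shown in the logs is to check whether the project package resolves from the directory the worker runs in. This is only a sketch; the deploy path is assumed to be the Elastic Beanstalk default:

# run inside the venv from /opt/python/current/app (assumed deploy path);
# if this raises ModuleNotFoundError, the worker's working directory or
# PYTHONPATH is the problem rather than the Celery settings themselves
import importlib

celery_module = importlib.import_module('myproj.celery')
print(celery_module.app)  # should print the Celery app instance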

UPDATE 2

99_celery.config was using the --workdir option with /tmp as the working directory; that option is not needed. I also applied a few other changes to that file.

99_celery.config

files:   "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":     mode: "000755"     owner: root     group: root     content: |       #!/usr/bin/env bash        # Create required directories       sudo mkdir -p /var/log/celery/       sudo mkdir -p /var/run/celery/        # Create group called 'celery'       sudo groupadd -f celery       # add the user 'celery' if it doesn't exist and add it to the group with same name       id -u celery &>/dev/null || sudo useradd -g celery celery       # add permissions to the celery user for r+w to the folders just created       sudo chown -R celery:celery /var/log/celery/       sudo chown -R celery:celery /var/run/celery/        # Get django environment variables       celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`       celeryenv=${celeryenv%?}        # Create celery configuration script       celeryconf="[program:celeryd-worker]       ; Set full path to celery program if using virtualenv       command=/opt/python/run/venv/bin/celery worker -A myproj --loglevel=INFO --logfile="/var/log/celery/%%n%%I.log" --pidfile="/var/run/celery/%%n.pid"        directory=/opt/python/current/app       user=celery       numprocs=1       stdout_logfile=/var/log/celery-worker.log       stderr_logfile=/var/log/celery-worker.log       autostart=true       autorestart=true       startsecs=10        ; Need to wait for currently executing tasks to finish at shutdown.       ; Increase this if you have very long running tasks.       stopwaitsecs = 600        ; When resorting to send SIGKILL to the program to terminate it       ; send SIGKILL to its whole process group instead,       ; taking care of its children as well.       killasgroup=true        ; if rabbitmq is supervised, set its priority higher       ; so it starts first       priority=998        environment=$celeryenv        [program:celeryd-beat]       ; Set full path to celery program if using virtualenv       command=/opt/python/run/venv/bin/celery beat -A myproj --loglevel=INFO --logfile="/var/log/celery/celery-beat.log" --pidfile="/var/run/celery/celery-beat.pid"        directory=/opt/python/current/app       user=celery       numprocs=1       stdout_logfile=/var/log/celery-beat.log       stderr_logfile=/var/log/celery-beat.log       autostart=true       autorestart=true       startsecs=10        ; Need to wait for currently executing tasks to finish at shutdown.       ; Increase this if you have very long running tasks.       stopwaitsecs = 600        ; When resorting to send SIGKILL to the program to terminate it       ; send SIGKILL to its whole process group instead,       ; taking care of its children as well.       killasgroup=true        ; if rabbitmq is supervised, set its priority higher       ; so it starts first       priority=998        environment=$celeryenv"        # Create the celery supervisord conf script       echo "$celeryconf" | tee /opt/python/etc/celery.conf        # Add configuration script to supervisord conf (if not there already)       if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf         then         echo "[include]" | tee -a /opt/python/etc/supervisord.conf         echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf       fi        # Enable supervisor to listen for HTTP/XML-RPC requests.       # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.       
# Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection       if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf         then         echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf         echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf       fi        # Reread the supervisord config       supervisorctl -c /opt/python/etc/supervisord.conf reread        # Update supervisord in cache without restarting all services       supervisorctl -c /opt/python/etc/supervisord.conf update        # Start/Restart celeryd through supervisord       supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat       supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker   container_commands:   00_celery_tasks_run:     command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"     leader_only: true 

My logs:

I SSHed into my EC2 instance, and the following are the log files:

/var/log/celery-worker.log

Traceback (most recent call last):
  File "/opt/python/run/venv/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/__main__.py", line 14, in main
    _main()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/celery.py", line 326, in main
    cmd.execute_from_commandline(argv)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 279, in execute_from_commandline
    argv = self.setup_app_from_commandline(argv)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 481, in setup_app_from_commandline
    self.app = self.find_app(app)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 503, in find_app
    return find_app(app, symbol_by_name=self.symbol_by_name)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/app/utils.py", line 355, in find_app
    sym = symbol_by_name(app, imp=imp)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 506, in symbol_by_name
    return imports.symbol_by_name(name, imp=imp)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/kombu/utils/imports.py", line 56, in symbol_by_name
    module = imp(module_name, package=package, **kwargs)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/utils/imports.py", line 101, in import_from_cwd
    return imp(module, package=package)
  File "/opt/python/run/venv/lib64/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'myproj'

/var/log/celery-beat.log

Traceback (most recent call last):
  File "/opt/python/run/venv/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/__main__.py", line 14, in main
    _main()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/celery.py", line 326, in main
    cmd.execute_from_commandline(argv)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 279, in execute_from_commandline
    argv = self.setup_app_from_commandline(argv)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 481, in setup_app_from_commandline
    self.app = self.find_app(app)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 503, in find_app
    return find_app(app, symbol_by_name=self.symbol_by_name)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/app/utils.py", line 355, in find_app
    sym = symbol_by_name(app, imp=imp)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bin/base.py", line 506, in symbol_by_name
    return imports.symbol_by_name(name, imp=imp)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/kombu/utils/imports.py", line 56, in symbol_by_name
    module = imp(module_name, package=package, **kwargs)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/utils/imports.py", line 101, in import_from_cwd
    return imp(module, package=package)
  File "/opt/python/run/venv/lib64/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'myproj'
celery beat v4.1.0 (latentcall) is starting.
__    -    ... __   -        _
LocalTime -> 2018-04-30 19:09:23
Configuration ->
    . broker -> sqs://AKIAJDSLHYFOJ6MYJZ5Q:**@localhost//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> django_celery_beat.schedulers.DatabaseScheduler
    . logfile -> /var/log/celery/celery-beat.log@%INFO
    . maxinterval -> 5.00 seconds (5s)

/var/log/celery/celery.log

[2018-04-30 19:09:24,049: CRITICAL/MainProcess] Unrecoverable error: RuntimeError("Model class django_celery_results.models.TaskResult doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.",)
Traceback (most recent call last):
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
    return obj.__dict__[self.__name__]
KeyError: 'backend'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/bootsteps.py", line 115, in start
    self.on_start()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/apps/worker.py", line 143, in on_start
    self.emit_banner()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/apps/worker.py", line 158, in emit_banner
    ' \n', self.startup_info(artlines=not use_image))),
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/apps/worker.py", line 221, in startup_info
    results=self.app.backend.as_uri(),
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/kombu/utils/objects.py", line 44, in __get__
    value = obj.__dict__[self.__name__] = self.__get(obj)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/app/base.py", line 1183, in backend
    return self._get_backend()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/app/base.py", line 901, in _get_backend
    self.loader)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/app/backends.py", line 66, in by_url
    return by_name(backend, loader), url
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/celery/app/backends.py", line 46, in by_name
    cls = symbol_by_name(backend, aliases)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/kombu/utils/imports.py", line 56, in symbol_by_name
    module = imp(module_name, package=package, **kwargs)
  File "/opt/python/run/venv/lib64/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django_celery_results/backends/__init__.py", line 4, in <module>
    from .database import DatabaseBackend
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django_celery_results/backends/database.py", line 7, in <module>
    from ..models import TaskResult
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django_celery_results/models.py", line 17, in <module>
    class TaskResult(models.Model):
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/models/base.py", line 108, in __new__
    "INSTALLED_APPS." % (module, name)
RuntimeError: Model class django_celery_results.models.TaskResult doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.

/var/log/celery/celery-beat.log

Traceback (most recent call last):
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
    self.connect()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/opt/python/run/venv/local/lib64/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django_celery_beat/schedulers.py", line 238, in sync
    with transaction.atomic():
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/transaction.py", line 147, in __enter__
    if not connection.get_autocommit():
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 378, in get_autocommit
    self.ensure_connection()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
    self.connect()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
    self.connect()
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/opt/python/run/venv/local/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/opt/python/run/venv/local/lib64/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?

Answer 1:

So there were multiple issues with your configs.

1. Incorrect app path for Celery

command=/opt/python/run/venv/bin/celery worker -A myproj --loglevel=INFO --logfile="/var/log/celery/%%n%%I.log" --pidfile="/var/run/celery/%%n.pid" 

As per your structure, it should have been:

command=/opt/python/run/venv/bin/celery worker -A celery_conf.celery_app:app --loglevel=INFO --logfile="/var/log/celery/%%n%%I.log" --pidfile="/var/run/celery/%%n.pid" 
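Note that this -A value assumes the Celery application object lives in celery_conf/celery_app.py rather than myproj/celery.py, and that module is not shown in the question. Under that assumption it would look roughly like this (a sketch mirroring the poster's celery.py, not their actual file):

# celery_conf/celery_app.py -- hypothetical module implied by "-A celery_conf.celery_app:app"
from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

# same settings module as in myproj/celery.py
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings_prod')

app = Celery('myproj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()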

2. Unicode dash instead of an ASCII hyphen (-)

You had a Unicode en dash (or something similar) in the config, perhaps copied from a website; it was not a plain ASCII - in the command below:

command=/opt/python/run/venv/bin/celery worker -A myproj --loglevel=INFO --logfile="/var/log/celery/%%n%%I.log" --pidfile="/var/run/celery/%%n.pid" 
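A quick way to hunt for that kind of stray character is to scan the config for anything outside ASCII; a small sketch (the file path is just an example):

# find_non_ascii.py -- flags lines containing non-ASCII characters (e.g. an en dash)
path = '.ebextensions/99_celery.config'  # example path

with open(path, encoding='utf-8') as fh:
    for lineno, line in enumerate(fh, 1):
        suspicious = [ch for ch in line if ord(ch) > 127]
        if suspicious:
            print(lineno, suspicious, line.rstrip())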

3. celery.conf and celerybeat.conf were not being appended to supervisord.conf

The code below was not working:

if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
    echo "[include]" | tee -a /opt/python/etc/supervisord.conf
    echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi

An [include] section was already present (added for uwsgi.conf), so the grep matched and the block that appends celery.conf never ran.
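The logic the final config uses instead, expressed here as a small Python sketch for clarity (paths as in the answer): only append the include block when celery.conf is not referenced yet, and keep uwsgi.conf in the files line so the web app stays supervised.

# sketch of the idempotent include logic implemented in the final config below
conf_path = '/opt/python/etc/supervisord.conf'

with open(conf_path) as fh:
    contents = fh.read()

if 'celery.conf' not in contents:
    with open(conf_path, 'a') as fh:
        fh.write('\n[include]\nfiles: uwsgi.conf celery.conf celerybeat.conf\n')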

4. Missing inet_http_server config

Although you had the following in your question:

if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf
then
    echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf
    echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf
fi

But you were not using it, and that causes the issue below:

[Instance: i-00c786a77c1f5ec11] Command failed on instance. Return code: 2 Output: (TRUNCATED)... ERROR: already shutting down error: , : file: /usr/lib64/python2.7/xmlrpclib.py line: 800 error: , : file: /usr/lib64/python2.7/xmlrpclib.py line: 800. Hook /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh failed.
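For background on why that [inet_http_server] block matters: supervisorctl talks to supervisord over XML-RPC on that port, and the same endpoint can be probed directly; a minimal sketch using the standard library (port taken from the config above):

# probe supervisord's XML-RPC interface; this only works once [inet_http_server] is enabled
import xmlrpc.client

server = xmlrpc.client.ServerProxy('http://127.0.0.1:9001/RPC2')
print(server.supervisor.getState())           # e.g. {'statecode': 1, 'statename': 'RUNNING'}
print(server.supervisor.getAllProcessInfo())  # lists the celeryd / celerybeat programs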

Final Config

Below is the final config that you need to use:

files:   "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":     mode: "000755"     owner: root     group: root     content: |       #!/usr/bin/env bash        # Create required directories       sudo mkdir -p /var/log/celery/       sudo mkdir -p /var/run/celery/        # Create group called 'celery'       sudo groupadd -f celery       # add the user 'celery' if it doesn't exist and add it to the group with same name       id -u celery &>/dev/null || sudo useradd -g celery celery       # add permissions to the celery user for r+w to the folders just created       sudo chown -R celery:celery /var/log/celery/       sudo chown -R celery:celery /var/run/celery/        # Get django environment variables       celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`       celeryenv=${celeryenv%?}        # Create CELERY configuration script       celeryconf="[program:celeryd]       directory=/opt/python/current/app       ; Set full path to celery program if using virtualenv       command=/opt/python/run/venv/bin/celery worker -A celery_conf.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/%%n%%I.log\" --pidfile=\"/var/run/celery/%%n.pid\"        user=celery       numprocs=1       stdout_logfile=/var/log/celery-worker.log       stderr_logfile=/var/log/celery-worker.log       autostart=true       autorestart=true       startsecs=10        ; Need to wait for currently executing tasks to finish at shutdown.       ; Increase this if you have very long running tasks.       stopwaitsecs = 60        ; When resorting to send SIGKILL to the program to terminate it       ; send SIGKILL to its whole process group instead,       ; taking care of its children as well.       killasgroup=true        ; if rabbitmq is supervised, set its priority higher       ; so it starts first       priority=998        environment=$celeryenv"         # Create CELERY BEAT configuraiton script       celerybeatconf="[program:celerybeat]       ; Set full path to celery program if using virtualenv       command=/opt/python/run/venv/bin/celery beat -A celery_conf.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/celery-beat.log\" --pidfile=\"/var/run/celery/celery-beat.pid\"        directory=/opt/python/current/app       user=celery       numprocs=1       stdout_logfile=/var/log/celerybeat.log       stderr_logfile=/var/log/celerybeat.log       autostart=true       autorestart=true       startsecs=10        ; Need to wait for currently executing tasks to finish at shutdown.       ; Increase this if you have very long running tasks.       stopwaitsecs = 60        ; When resorting to send SIGKILL to the program to terminate it       ; send SIGKILL to its whole process group instead,       ; taking care of its children as well.       killasgroup=true        ; if rabbitmq is supervised, set its priority higher       ; so it starts first       priority=999        environment=$celeryenv"        # Create the celery supervisord conf script       echo "$celeryconf" | tee /opt/python/etc/celery.conf       echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf        # Add configuration script to supervisord conf (if not there already)       if ! 
grep -Fxq "celery.conf" /opt/python/etc/supervisord.conf         then           echo "[include]" | tee -a /opt/python/etc/supervisord.conf           echo "files: uwsgi.conf celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf       fi        # Enable supervisor to listen for HTTP/XML-RPC requests.       # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.       # Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection       if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf         then           echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf           echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf       fi        # Reread the supervisord config       supervisorctl -c /opt/python/etc/supervisord.conf reread        # Update supervisord in cache without restarting all services       supervisorctl -c /opt/python/etc/supervisord.conf update        # Start/Restart celeryd through supervisord       supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd       supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat   commands:   01_killotherbeats:     command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"     ignoreErrors: true   02_restartbeat:     command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"     leader_only: true 

Case Resolved!!


