How to run a celery worker on AWS Elastic Beanstalk?

Asked 2021-01-01 05:17 · 2 answers · 664 views

Versions:

  • Django 1.9.8
  • celery 3.1.23
  • django-celery 3.1.17
  • Python 2.7

I'm trying to run my celery worker on AWS Elastic Beanstalk.

2 Answers
  • 2021-01-01 05:34

    I forgot to add an answer after solving this. Here is how I fixed it: I created a new file, "99-celery.config", in my .ebextensions folder with the code below, and it works perfectly. (Don't forget to change the project name in the command= line; mine is molocate_eb.)

    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
        mode: "000755"
        owner: root
        group: root
        content: |
          #!/usr/bin/env bash
    
          # Get django environment variables
          celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
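          # Strip the trailing comma left by the tr join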
          celeryenv=${celeryenv%?}
    
          # Create celery configuration script
          celeryconf="[program:celeryd]
          ; Set full path to celery program if using virtualenv
          command=/opt/python/current/app/molocate_eb/manage.py celery worker --loglevel=INFO
    
          directory=/opt/python/current/app
          user=nobody
          numprocs=1
          stdout_logfile=/var/log/celery-worker.log
          stderr_logfile=/var/log/celery-worker.log
          autostart=true
          autorestart=true
          startsecs=10
    
          ; Need to wait for currently executing tasks to finish at shutdown.
          ; Increase this if you have very long running tasks.
          stopwaitsecs = 600
    
          ; When resorting to send SIGKILL to the program to terminate it
          ; send SIGKILL to its whole process group instead,
          ; taking care of its children as well.
          killasgroup=true
    
          ; if rabbitmq is supervised, set its priority higher
          ; so it starts first
          priority=998
    
          environment=$celeryenv"
    
          # Create the celery supervisord conf script
          echo "$celeryconf" | tee /opt/python/etc/celery.conf
    
          # Add configuration script to supervisord conf (if not there already)
          if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
              then
              echo "[include]" | tee -a /opt/python/etc/supervisord.conf
              echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
          fi
    
          # Reread the supervisord config
          supervisorctl -c /opt/python/etc/supervisord.conf reread
    
          # Update supervisord in cache without restarting all services
          supervisorctl -c /opt/python/etc/supervisord.conf update
    
          # Start/Restart celeryd through supervisord
          supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
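
    After a deploy, you can confirm the worker actually came up. A quick sanity check (assuming the default Elastic Beanstalk Python platform paths used in the script above):

      supervisorctl -c /opt/python/etc/supervisord.conf status celeryd
      tail -n 50 /var/log/celery-worker.log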
    

    Edit: In case of a supervisor error on AWS, make sure that:

    • You're using Python 2, not Python 3, since supervisor (3.x) doesn't run on Python 3.
    • You've added supervisor to your requirements.txt (see the snippet below).
    • If it still gives an error (happened to me once), just 'Rebuild Environment' and it'll probably work.
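
    For example, a requirements.txt matching the versions in the question might look like this (the supervisor pin is illustrative; any 3.x release runs on Python 2):

      Django==1.9.8
      celery==3.1.23
      django-celery==3.1.17
      supervisor==3.3.1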
  • 2021-01-01 05:41

    You can use supervisor to run celery; it will run the celery worker as a daemon process. A minimal program block looks like this (paths are placeholders to adjust for your project):

    [program:celeryd]
    ; directory where the Django project lies
    directory=/path/to/your/django/project
    ; command to run celery (django-celery style)
    command=python manage.py celery worker --loglevel=INFO
    stderr_logfile=/var/log/supervisord/celeryd-stderr.log
    stdout_logfile=/var/log/supervisord/celeryd-stdout.log
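
    Once this block is part of your supervisord configuration (for example via an included celery.conf, as in the answer above), load it and start the worker:

      supervisorctl reread
      supervisorctl update
      supervisorctl start celeryd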
    