I'm trying to run an example from the Celery documentation.
I run: celeryd --loglevel=INFO
/usr/local/lib/python2.7/dist-packages/celery/loade
If you use autodiscover_tasks, make sure the functions you want registered live in tasks.py, not in any other file; otherwise Celery cannot find them.
Calling app.register_task will also do the job, but it seems a little naive.
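For reference, here is a minimal sketch of explicit registration with register_task, which takes a task instance (the broker URL, task class, and names are assumptions for illustration):

    from celery import Celery, Task

    app = Celery('proj', broker='redis://localhost:6379/0')  # assumed broker

    class AddTask(Task):
        """Class-based task registered by hand instead of via discovery."""
        name = 'proj.add'

        def run(self, x, y):
            return x + y

    # register_task adds the instance to app.tasks under its name
    app.register_task(AddTask())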
Please refer to the official docstring of autodiscover_tasks:
    def autodiscover_tasks(self, packages=None, related_name='tasks', force=False):
        """Auto-discover task modules.

        Searches a list of packages for a "tasks.py" module (or use
        related_name argument).

        If the name is empty, this will be delegated to fix-ups (e.g., Django).

        For example if you have a directory layout like this:

        .. code-block:: text

            foo/__init__.py
                tasks.py
                models.py

            bar/__init__.py
                tasks.py
                models.py

            baz/__init__.py
                models.py

        Then calling ``app.autodiscover_tasks(['foo', 'bar', 'baz'])`` will
        result in the modules ``foo.tasks`` and ``bar.tasks`` being imported.

        Arguments:
            packages (List[str]): List of packages to search.
                This argument may also be a callable, in which case the
                value returned is used (for lazy evaluation).
            related_name (str): The name of the module to find.  Defaults
                to "tasks": meaning "look for 'module.tasks' for every
                module in ``packages``."
            force (bool): By default this call is lazy so that the actual
                auto-discovery won't happen until an application imports
                the default modules.  Forcing will cause the auto-discovery
                to happen immediately.
        """
I think you need to restart the worker server. I met the same problem and solved it by restarting.
My 2 cents
I was getting this in a Docker image based on Alpine. The Django settings referenced /dev/log for logging to syslog, and the Django app and the Celery worker were both built from the same image. The entrypoint of the Django app image launched syslogd on start, but the entrypoint for the Celery worker did not. With no syslogd, /dev/log never existed, which made things like ./manage.py shell fail. The Celery worker, however, did not fail: it silently skipped the rest of the app launch, including loading the shared_task entries from the applications in the Django project.
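A hypothetical excerpt of the kind of Django LOGGING setting that creates this dependency (the answer does not show the original settings, so the names below are assumptions):

    # settings.py (hypothetical): a handler bound to the /dev/log socket.
    # If no syslogd has created /dev/log, opening this handler raises at import time.
    LOGGING = {
        'version': 1,
        'handlers': {
            'syslog': {
                'class': 'logging.handlers.SysLogHandler',
                'address': '/dev/log',
            },
        },
        'root': {'handlers': ['syslog'], 'level': 'INFO'},
    }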
I did not have this issue with Django, but I ran into it with Flask. The solution was pointing the worker at the configuration module via the --config option:
celery worker -A app.celery --loglevel=DEBUG --config=settings
while with Django, I just had:
python manage.py celery worker -c 2 --loglevel=info
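For the Flask case, the wiring that matches the -A app.celery part of that command might look like this sketch (the file name, broker URL, and task are assumptions):

    # app.py (sketch): module "app" exposing a "celery" attribute
    from celery import Celery
    from flask import Flask

    flask_app = Flask(__name__)
    celery = Celery(flask_app.import_name, broker='redis://localhost:6379/0')
    celery.config_from_object('settings')  # the same module --config names

    @celery.task
    def add(x, y):
        return x + y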
You can see the current list of registered tasks in the celery.registry.TaskRegistry class. It could be that your celeryconfig (in the current directory) is not on PYTHONPATH, so Celery can't find it and falls back to the defaults. Simply specify it explicitly when starting celery:
celeryd --loglevel=INFO --settings=celeryconfig
You can also set --loglevel=DEBUG and you should probably see the problem immediately.
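A minimal celeryconfig.py for that command might look like this sketch (the broker URL and the task module name are assumptions):

    # celeryconfig.py (sketch): lives in the current directory, passed via --settings
    BROKER_URL = 'amqp://guest:guest@localhost:5672//'  # assumed broker
    CELERY_IMPORTS = ('tasks',)                         # assumed task module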
Try importing the Celery task in a Python shell: Celery might be silently failing to register your tasks because of a bad import statement.
I had an ImportError in my tasks.py file that kept Celery from registering the tasks in that module; the tasks in every other module were registered correctly.
The error wasn't evident until I tried importing the Celery task in a Python shell. Once I fixed the bad import statement, the tasks were registered successfully.
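A quick way to run that check (the module and task names below are placeholders):

    # In a Python shell: a broken import surfaces immediately here,
    # instead of being swallowed by the worker at startup.
    from myapp.tasks import my_task   # placeholder names

    from celery import current_app
    print(sorted(current_app.tasks))  # your task's name should appear here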