Celery Received unregistered task of type (run example)

Submitted anonymously (unverified) on 2019-12-03 01:47:02

Question:

I'm trying to run an example from the Celery documentation.

I run:

celeryd --loglevel=INFO

/usr/local/lib/python2.7/dist-packages/celery/loaders/default.py:64: NotConfigured: No 'celeryconfig' module found! Please make sure it exists and is available to Python.
  "is available to Python." % (configname, )))
[2012-03-19 04:26:34,899: WARNING/MainProcess]

 -------------- celery@ubuntu v2.5.1
---- **** -----
--- * ***  * -- [Configuration]
-- * - **** ---   . broker:      amqp://guest@localhost:5672//
- ** ----------   . loader:      celery.loaders.default.Loader
- ** ----------   . logfile:     [stderr]@INFO
- ** ----------   . concurrency: 4
- ** ----------   . events:      OFF
- *** --- * ---   . beat:        OFF
-- ******* ----
--- ***** ----- [Queues]
 --------------   . celery:      exchange:celery (direct) binding:celery

tasks.py:

# -*- coding: utf-8 -*-
from celery.task import task

@task
def add(x, y):
    return x + y

run_task.py:

# -*- coding: utf-8 -*-
from tasks import add

result = add.delay(4, 4)
print(result)
print(result.ready())
print(result.get())

In the same folder, celeryconfig.py:

CELERY_IMPORTS = ("tasks", )
CELERY_RESULT_BACKEND = "amqp"
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_TASK_RESULT_EXPIRES = 300

When I run "run_task.py":

On the Python console:

eb503f77-b5fc-44e2-ac0b-91ce6ddbf153
False

Errors on the celeryd server:

[2012-03-19 04:34:14,913: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.

Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.

The full contents of the message body was:
{'retries': 0, 'task': 'tasks.add', 'utc': False, 'args': (4, 4), 'expires': None, 'eta': None, 'kwargs': {}, 'id': '841bc21f-8124-436b-92f1-e3b62cafdfe7'}

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 444, in receive_message
    self.strategies[name](message, body, message.ack_log_error)
KeyError: 'tasks.add'

Please explain what the problem is.

Answer 1:

You can see the current list of registered tasks in the celery.registry.TaskRegistry class. It could be that your celeryconfig (in the current directory) is not on PYTHONPATH, so Celery can't find it and falls back to the defaults. Simply specify it explicitly when starting Celery:

celeryd --loglevel=INFO --settings=celeryconfig 

You can also set --loglevel=DEBUG and you should probably see the problem immediately.
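As a quick sanity check, you can also list what the app actually registered from a Python shell. A minimal sketch, assuming a reasonably recent Celery and that celeryconfig.py is importable from the current directory:

from celery import current_app

# Import everything named in CELERY_IMPORTS so the registry is populated,
# then print every task name the app knows about.
current_app.loader.import_default_modules()
for name in sorted(current_app.tasks):
    print(name)

If tasks.add is missing from that list, a worker started from the same configuration will reject the message exactly as shown in the question.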



Answer 2:

I had the same problem: the reason for "Received unregistered task of type..." was that the celeryd service didn't find and register the tasks on service start (by the way, their list is visible when you start ./manage.py celeryd --loglevel=info).

These tasks should be declared in CELERY_IMPORTS = ("tasks",) in the settings file.
If you have a special celery_settings.py file, it has to be declared on celeryd service start as --settings=celery_settings (a module name, not a file name), as digivampire wrote.



Answer 3:

I think you need to restart the worker server. I met the same problem and solved it by restarting.



Answer 4:

I also had the same problem; I added

CELERY_IMPORTS = ("mytasks",)  # note the trailing comma; without it this is a string, not a tuple

in my celeryconfig.py file to solve it.
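Once an import like this is in place, you can verify that a running worker actually picked the tasks up. Assuming a Celery version with the celery umbrella command, this queries the live workers for their registered task names:

celery inspect registered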



Answer 5:

Whether you use CELERY_IMPORTS or autodiscover_tasks, the important point is that the tasks can be found, and that the names of the tasks registered in Celery match the names the workers try to fetch.

When you launch Celery, say with celery worker -A project --loglevel=DEBUG, you should see the names of the tasks. For example, here is what I see when I have a debug_task task in my celery.py:

[tasks]
  . project.celery.debug_task
  . celery.backend_cleanup
  . celery.chain
  . celery.chord
  . celery.chord_unlock
  . celery.chunks
  . celery.group
  . celery.map
  . celery.starmap

If you can't see your tasks in the list, check that your Celery configuration imports the tasks correctly, whether via --settings, --config, celeryconfig, or config_from_object.
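For reference, a minimal sketch of the config_from_object variant (the module name celeryconfig is conventional; adjust to your layout):

from celery import Celery

app = Celery('project')
# Pull settings (including CELERY_IMPORTS) from a module
# named celeryconfig.py found on the Python path.
app.config_from_object('celeryconfig')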

If you are using celery beat, make sure the task name (the task key) you use in CELERYBEAT_SCHEDULE matches a name in the celery task list.



Answer 6:

For me, this error was solved by ensuring the app containing the tasks was included under Django's INSTALLED_APPS setting.
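A minimal sketch of the relevant setting (the app name myapp is illustrative):

# settings.py
INSTALLED_APPS = [
    # ... Django and third-party apps ...
    'myapp',  # the app whose tasks.py defines the tasks; without this
              # entry, autodiscover_tasks() never imports it
]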



Answer 7:

Using --settings did not work for me. I had to use the following to get it all to work:

celery --config=celeryconfig --loglevel=INFO 

Here is the celeryconfig file that has the CELERY_IMPORTS added:

# Celery configuration file
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'

CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'America/Los_Angeles'
CELERY_ENABLE_UTC = True

CELERY_IMPORTS = ("tasks",)

My setup was a little bit more tricky because I'm using supervisor to launch celery as a daemon.



Answer 8:

I had the same problem running tasks from Celery Beat. Celery doesn't like relative imports, so in my celeryconfig.py I had to set the full package name explicitly:

app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'full.path.to.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}


Answer 9:

I had this problem mysteriously crop up when I added some signal handling to my Django app. In doing so I converted the app to use an AppConfig, meaning that instead of simply reading 'booking' in INSTALLED_APPS, it read 'booking.app.BookingConfig'.

Celery doesn't understand what that means, so I added INSTALLED_APPS_WITH_APPCONFIGS = ('booking',) to my Django settings, and modified my celery.py from

app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) 

to

app.autodiscover_tasks(
    lambda: settings.INSTALLED_APPS + settings.INSTALLED_APPS_WITH_APPCONFIGS
)


Answer 10:

If you are running into this kind of error, there are a number of possible causes, but the one I found was that my celeryd config file in /etc/default/celeryd was configured for standard use, not for my specific Django project. As soon as I converted it to the format specified in the Celery docs, all was well.
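For reference, a sketch along the lines of the generic init-script configuration in the Celery daemonizing docs (all paths and names below are illustrative):

# /etc/default/celeryd
CELERYD_NODES="worker1"
# Absolute path to the 'celery' command
CELERY_BIN="/usr/local/bin/celery"
# App instance to use (the -A argument)
CELERY_APP="myproject"
# chdir to the Django project directory so the tasks are importable
CELERYD_CHDIR="/opt/myproject/"
CELERYD_OPTS="--concurrency=8"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"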



Answer 11:

The solution for me was to add this line to /etc/default/celeryd:

CELERYD_OPTS="-A tasks" 

Because when I run these commands:

celery worker --loglevel=INFO
celery worker -A tasks --loglevel=INFO

Only the latter command showed task names at all.

I also tried adding a CELERY_APP line to /etc/default/celeryd, but that didn't work either:

CELERY_APP="tasks" 


Answer 12:

I encountered this problem as well, but it is not quite the same, so just FYI. Recent upgrades cause this error message due to the decorator syntax:

ERROR/MainProcess] Received unregistered task of type 'my_server_check'.

@task('my_server_check')

This had to be changed to just

@task()

No clue why.
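A plausible explanation, not confirmed by this answer: the decorator expects a custom name as a keyword argument rather than positionally, so if a custom name is still wanted the keyword form should work. A sketch, using the same import style as the question:

from celery.task import task  # newer code would use app.task or shared_task

@task(name='my_server_check')
def my_server_check():
    # task body goes here
    pass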



Answer 13:

I had the issue with PeriodicTask classes in django-celery: while their names showed up fine when starting the celery worker, every execution triggered:

KeyError: u'my_app.tasks.run'

My task was a class named 'CleanUp', not just a method called 'run'.

When I checked the 'djcelery_periodictask' table, I saw outdated entries; deleting them fixed the issue.



Answer 14:

Just to add my two cents for my case with this error...

My path is /vagrant/devops/test with app.py and __init__.py in it.

When I run cd /vagrant/devops/ && celery worker -A test.app.celery --loglevel=info, I get this error.

But when I run it as cd /vagrant/devops/test && celery worker -A app.celery --loglevel=info, everything is OK. Presumably this is because registered task names are derived from the module path the worker imports, so test.app and app register the same tasks under different names.



Answer 15:

I found that one of our programmers had added the following line to one of the imported modules:

os.chdir() 

This caused the Celery worker to change its working directory from the project's default working directory (where it could find the tasks) to a different directory (where it couldn't find the tasks).

After removing this line of code, all tasks were found and registered.



Answer 16:

My Celery version is 4.0.2 (latentcall). I am using autodiscover_tasks(). I had the same problem because I hadn't activated my Python environment.

Hope this may help.



Answer 17:

Celery doesn't support relative imports, so in my celeryconfig.py I needed an absolute import:

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add_num': {
        'task': 'app.tasks.add_num.add_nums',
        'schedule': timedelta(seconds=10),
        'args': (1, 2)
    }
}
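For the schedule above to resolve, a task has to be registered under exactly that dotted name. For example, with a module at app/tasks/add_num.py (the layout is assumed from the name used above):

from celery import shared_task

@shared_task
def add_nums(a, b):
    # Registered as 'app.tasks.add_num.add_nums', matching the schedule entry.
    return a + b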


Answer 18:

An additional item to a really useful list.

I have found Celery unforgiving of errors in tasks (or at least I haven't been able to trace the appropriate log entries), and it simply doesn't register them. I have had a number of issues running Celery as a service, predominantly permissions related.

The latest was about permissions for writing to a log file. I had no issues in development or when running Celery at the command line, but the service reported the task as unregistered.

I needed to change the log folder permissions to enable the service to write to it.



Answer 19:

My 2 cents

I was getting this in a Docker image based on Alpine. The Django settings referenced /dev/log for logging to syslog. The Django app and the celery worker were both based on the same image. The entrypoint of the Django app image launched syslogd on start, but the one for the celery worker did not. This caused things like ./manage.py shell to fail, because there was no /dev/log. The celery worker, however, was not failing; instead it silently ignored the rest of the app launch, which included loading shared_task entries from the applications in the Django project.



Answer 20:

If you are using Windows and Celery 4.0.1, you need to add the --pool=solo option when running Celery:

celery -A project_name worker -l DEBUG --pool=solo 

