Scrapy spider not found error

Submitted by 冷暖自知 on 2020-02-17 08:16:32

Question


This is Windows 7 with Python 2.7.

I have a scrapy project in a directory called caps (this is where scrapy.cfg is)

My spider is located in caps\caps\spiders\campSpider.py

I cd into the scrapy project and try to run

scrapy crawl campSpider -o items.json -t json

I get an error that the spider can't be found. The class name is campSpider.

...
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "c:\Python27\lib\site-packages\scrapy-0.14.0.2841-py2.7-win32.egg\scrapy\spidermanager.py", l
ine 43, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: campSpider'

Am I missing some configuration item?


Answer 1:


Make sure you have set the "name" property of the spider. Example:

from scrapy.spider import BaseSpider

class campSpider(BaseSpider):
    name = 'campSpider'

Without the name property, Scrapy's spider manager will not be able to find your spider.




Answer 2:


Also make sure that your project is not called scrapy! I made that mistake and renaming it fixed the problem.




Answer 3:


Have you set up the SPIDER_MODULES setting?

SPIDER_MODULES

Default: []

A list of modules where Scrapy will look for spiders.

Example:

SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']
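For the project in the question, the corresponding settings.py entries would look roughly like this (assuming the project module is named caps, which the directory layout suggests):

BOT_NAME = 'caps'

SPIDER_MODULES = ['caps.spiders']
NEWSPIDER_MODULE = 'caps.spiders'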




Answer 4:


Try running scrapy list on the command line. If there is an error in the spider code, it will be reported.

In my case, I had blindly copied code from another project and forgotten to change the project name in the spider module import.
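When the spider is discoverable, scrapy list simply prints its name; when an import fails, the traceback points at the broken module. Roughly, for the question's project:

C:\caps> scrapy list
campSpider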




Answer 5:


You have to give a name to your spider.

However, BaseSpider is deprecated, use Spider instead.

from scrapy.spiders import Spider

class campSpider(Spider):
    name = 'campSpider'

The project should have been created by the startproject command:

scrapy startproject project_name

Which gives you the following directory tree:

project_name/
    scrapy.cfg            # deploy configuration file

    project_name/             # project's Python module, you'll import your code from here
        __init__.py

        items.py          # project items file

        pipelines.py      # project pipelines file

        settings.py       # project settings file

        spiders/          # a directory where you'll later put your spiders
            __init__.py
            ...

Make sure that settings.py defines your spider module, e.g.:

BOT_NAME = 'bot_name'  # usually the same as your project_name

SPIDER_MODULES = ['project_name.spiders']
NEWSPIDER_MODULE = 'project_name.spiders'

You should have no problem running your spider locally or on Scrapinghub.




Answer 6:


For anyone who might have the same problem: not only do you need to set the spider's name and check SPIDER_MODULES and NEWSPIDER_MODULE in your Scrapy settings, but if you are running a scrapyd service you also need to restart it (or redeploy the project) for any change to take effect.
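A rough sequence for that case, assuming the project is deployed with scrapyd-client and scrapyd runs as a system service (both assumptions; adjust to your setup):

scrapyd-deploy                  # re-deploy the project so scrapyd sees the changed spider
sudo systemctl restart scrapyd  # restart the service if changes still don't show up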




Answer 7:


Make sure that your spider file is saved in your project's spiders directory. The crawler looks for spider names in that directory.




Answer 8:


The name attribute in the CrawlSpider class defines the spider name, and this name is what you use on the command line to run the spider.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class NameSpider(CrawlSpider):
    name = 'name of spider'
    allowed_domains = ['allowed domains of web portal to be scraped']
    start_urls = ['start url of web portal to be scraped']

    custom_settings = {
        'DOWNLOAD_DELAY': 1,
        'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
    }

    product_css = ['.main-menu']
    rules = [
        # CrawlSpider reserves parse() for its own logic, so use a
        # differently named callback for the extracted links
        Rule(LinkExtractor(restrict_css=product_css), callback='parse_item'),
    ]

    def parse_item(self, response):
        # implementation of business logic
        pass



Answer 9:


I also had this problem, and it turned out to be rather small. Be sure your class inherits from scrapy.Spider:

class MyClass(scrapy.Spider):



Answer 10:


Check indentation too; the class for my spider was indented one tab. If the class is not defined at module level (or the file no longer imports cleanly), Scrapy cannot discover it.
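A minimal sketch of the correct layout (the spider name follows the question; the body is illustrative):

# spiders/campSpider.py
# If the class definition below were indented (a stray tab, or nesting inside
# another block), the module would either fail to import or the class would
# no longer be at module level, so the spider loader would never find it.
import scrapy

class CampSpider(scrapy.Spider):
    name = 'campSpider'
    start_urls = ['http://example.com']

    def parse(self, response):
        yield {'url': response.url}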




Answer 11:


Without a project, use runspider with the file name; within a project, use crawl with the spider name. Example: C:\user> scrapy runspider myFile.py
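Applied to the question's layout (paths taken from the question), the two forms would be roughly:

C:\> scrapy runspider caps\caps\spiders\campSpider.py

C:\> cd caps
C:\caps> scrapy crawl campSpider -o items.json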




Answer 12:


In my case, I had set LOG_STDOUT = True, and scrapyd could not return results in the JSON response when you look for your spiders with /listspiders.json. Instead, the results were being printed to the log files configured in scrapyd's default_scrapyd.conf. So I changed the setting as follows, and it worked well.

LOG_STDOUT = False
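With LOG_STDOUT back to False, scrapyd's /listspiders.json endpoint should answer with JSON again, roughly like this (project name assumed to be caps):

$ curl "http://localhost:6800/listspiders.json?project=caps"
{"status": "ok", "spiders": ["campSpider"]}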



Answer 13:


Ah yes, you should use the value of the spider's name variable.

I.e.

import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http://quotes.toscrape.com/'
    ]

    def parse(self, response):
        title = response.css('title').extract()
        yield {'titleText' : title}

So in this case, name = 'quotes'. Then on the command line you enter: scrapy crawl quotes

That was my problem.




Answer 14:


If you are following the tutorial from https://docs.scrapy.org/en/latest/intro/tutorial.html

Then do something like:

$ sudo apt install python-pip
$ pip install Scrapy
(logout, login)
$ cd
$ scrapy startproject tutorial
$ vi ~/tutorial/tutorial/spiders/quotes_spider.py
$ cd ~/tutorial/tutorial
$ scrapy crawl quotes

The error happens if you try to create the spiders directory yourself under ~/tutorial instead of using the one startproject already created under ~/tutorial/tutorial.
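For reference, the quotes_spider.py created in the vi step is roughly the spider from the official tutorial; the key point is that it lives inside the spiders package that startproject created:

# ~/tutorial/tutorial/spiders/quotes_spider.py
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {'text': quote.css('span.text::text').get()}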



Source: https://stackoverflow.com/questions/9876793/scrapy-spider-not-found-error
