Scrapy Follow & Scrape next Pages

Submitted by 自闭症网瘾萝莉.ら on 2019-12-24 04:27:10

Question


I am having a problem where none of my Scrapy spiders will crawl a website; each one scrapes a single page and then stops. I was under the impression that the rules member variable was responsible for following links, but I can't get it to follow any. I have been following the documentation here: http://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider

What could I be missing that prevents any of my spiders from crawling?

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.selector import Selector

from Example.items import ExItem

class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.ac.uk"]
    start_urls = (
        'http://www.example.ac.uk',
    )

    rules = (
        Rule(LinkExtractor(allow=("",)), callback="parse_items", follow=True),
    )

Answer 1:


Replace your rule with this one. The restrict_xpaths argument limits extraction to the links inside the pagination div, and allow=('course-finder',) keeps the spider on the course-finder URLs, so it follows the next-page links instead of stopping after one page:

rules = (
    Rule(
        LinkExtractor(allow=('course-finder',), restrict_xpaths=('//div[@class="pagination"]',)),
        callback='parse_items',
        follow=True,
    ),
)
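
For context, here is a minimal sketch of how that rule fits into a complete spider. The import paths are the modern ones (scrapy.spiders and scrapy.linkextractors, which replaced the deprecated scrapy.contrib paths in Scrapy 1.0); the body of parse_items, the div.course CSS selectors, and the item fields are hypothetical placeholders, since the question does not show the real page markup or item definition.

from scrapy.spiders import CrawlSpider, Rule        # modern path for scrapy.contrib.spiders
from scrapy.linkextractors import LinkExtractor     # modern path for scrapy.contrib.linkextractors


class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.ac.uk"]
    start_urls = ["http://www.example.ac.uk"]

    rules = (
        # Extract only links found inside the pagination div whose URL
        # contains "course-finder"; follow them and hand each response
        # to parse_items.
        Rule(
            LinkExtractor(
                allow=("course-finder",),
                restrict_xpaths=('//div[@class="pagination"]',),
            ),
            callback="parse_items",
            follow=True,
        ),
    )

    def parse_items(self, response):
        # Hypothetical callback: the real selectors depend on the page markup.
        for course in response.css("div.course"):
            yield {
                "title": course.css("h2::text").get(),
                "url": response.urljoin(course.css("a::attr(href)").get() or ""),
            }

With that in place the spider can be run as usual, e.g. scrapy crawl example -o courses.json, and it should keep requesting the pagination links until none remain. Note that a CrawlSpider callback must not be named parse, since CrawlSpider uses that method internally.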


Source: https://stackoverflow.com/questions/28805663/scrapy-follow-scrape-next-pages
