How to recursively crawl subpages with Scrapy

I'm not familiar with ElasticSearch, but I'd build the scraper like this:

import scrapy

class randomSpider(scrapy.Spider):
    name = "helpme"
    allowed_domains = ["example.com"]
    start_urls = ['http://example.com/categories']

    def parse(self, response):
        for i in response.css('div.CategoryTreeSection'):
            # This is where you select the subcategory URL
            subcategory = i.css('Put your selector here').extract_first()
            req = scrapy.Request(response.urljoin(subcategory), callback=self.parse_subcategory)
            # Pass the category name along with the request
            req.meta['category'] = i.css('a::text').extract_first()
            yield req

    def parse_subcategory(self, response):
        yield {
            'category': response.meta.get('category'),
            # Select the name of the subcategory
            'subcategory': response.css('Put your selector here').extract_first(),
            # Select the data of the subcategory
            'subcategorydata': response.css('Put your selector here').extract_first(),
        }

You collect the subcategory URL and send a request for it. The response to that request is handled by parse_subcategory. While building the request, we attach the category name to its meta dict so the value travels along to the callback.
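
If you're on Scrapy 1.7 or newer, there's a slightly cleaner way to do the same thing (an alternative, not something the snippet above requires): response.follow resolves relative URLs against the current page, and cb_kwargs hands values to the callback as ordinary keyword arguments instead of going through meta. A minimal sketch of the same loop, selectors still placeholders:

    def parse(self, response):
        for i in response.css('div.CategoryTreeSection'):
            subcategory = i.css('Put your selector here').extract_first()
            # response.follow joins relative URLs for us; cb_kwargs (Scrapy 1.7+)
            # delivers 'category' straight to the callback's signature
            yield response.follow(
                subcategory,
                callback=self.parse_subcategory,
                cb_kwargs={'category': i.css('a::text').extract_first()},
            )

    def parse_subcategory(self, response, category):
        ...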

In the parse_subcategory function you read the category name back out of response.meta and scrape the subcategory's own data from the page.
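
Since the question is about crawling subpages recursively: if each subcategory page links to still deeper pages, the same trick simply nests. Yield more Requests from parse_subcategory and forward everything collected so far through meta. A sketch with placeholder selectors (parse_page is a hypothetical name for the deepest callback):

    def parse_subcategory(self, response):
        category = response.meta.get('category')
        subcategory = response.css('Put your selector here').extract_first()
        # Recurse one level down: follow every subpage link,
        # carrying both names along in meta
        for href in response.css('Put your selector here::attr(href)').extract():
            req = scrapy.Request(response.urljoin(href), callback=self.parse_page)
            req.meta['category'] = category
            req.meta['subcategory'] = subcategory
            yield req

    def parse_page(self, response):
        yield {
            'category': response.meta.get('category'),
            'subcategory': response.meta.get('subcategory'),
            'data': response.css('Put your selector here').extract_first(),
        }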

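To try the spider out, one option is to run it from a plain Python script with CrawlerProcess; the FEEDS setting (Scrapy 2.1+) writes the yielded items to a JSON file. The output file name is just an example:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={
    # Export every yielded item to output.json (Scrapy 2.1+ FEEDS syntax)
    'FEEDS': {'output.json': {'format': 'json'}},
})
process.crawl(randomSpider)
process.start()  # blocks until the crawl has finished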