How do I create rules for a CrawlSpider using Scrapy?


Strictly speaking, this doesn't answer the question, since my code uses a BaseSpider instead of a CrawlSpider, but it does fulfil the OP's requirement, so...

Points to note:

  1. Since not all of the pagination links are available (you get the first nine and then the last two), I employed a somewhat hacktastic approach. Using the first response in the parse callback, I search for a link with a class of "next" (there's only one, so have a look to see which link it corresponds to), and then find its immediately preceding sibling. This gives me a handle on the total number of pages in the seinen category (currently 45).
  2. Next, we yield a Request object for the first page to be processed by the parse_item callback.
  3. Then, given that we have determined that there are 45 pages in total, we generate a whole series of Request objects for "./seinen/2.htm" all the way to "./seinen/45.htm".
  4. Since rating is a list whose values are strings (which I should have realised, given that they're compared against the float 4.5), the way to fix the error encountered is to loop through the list of ratings and cast each item to a float before comparing.

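As a quick standalone sketch of steps 2 and 3, with the index depth hard-coded to the 45 pages mentioned above rather than scraped, the URL generation looks like this:

```python
# index_depth would normally be scraped from the pagination links;
# it is hard-coded here purely for illustration
index_depth = 45

page_urls = ["http://www.mangahere.com/seinen/"]  # the first page has no number
for x in range(2, index_depth + 1):  # +1 so the last page (45.htm) is included
    page_urls.append("http://www.mangahere.com/seinen/%s.htm" % x)
```

In the spider itself, each of these URLs becomes a Request with parse_item as the callback.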
Anyway, have a look at the following code and see if it makes sense. In theory you should be able to easily extend this code to scrape multiple categories, though that is left as an exercise for the OP. :)

from scrapy.spider import BaseSpider
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from tutorial.items import MangaItem
from urlparse import urlparse

class MangaHere(BaseSpider):
    name = "mangah2"
    start_urls = ["http://www.mangahere.com/seinen/"]
    allowed_domains = ["mangahere.com"]

    def parse(self, response):
        # get index depth ie the total number of pages for the category
        hxs = HtmlXPathSelector(response)
        next_link = hxs.select('//a[@class="next"]')
        index_depth = int(next_link.select('preceding-sibling::a[1]/text()').extract()[0])

        # create a request for the first page
        url = urlparse("http://www.mangahere.com/seinen/")
        yield Request(url.geturl(), callback=self.parse_item)

        # create a request for each subsequent page in the form "./seinen/x.htm"
        for x in xrange(2, index_depth + 1):  # +1 so the last page is included
            page_url = "http://www.mangahere.com/seinen/%s.htm" % x
            url = urlparse(page_url)
            yield Request(url.geturl(), callback=self.parse_item)

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li/div')
        items = []
        for site in sites:
            rating = site.select("p/span/text()").extract()
            for r in rating:
                if float(r) > 4.5:
                    item = MangaItem()
                    item["title"] = site.select("div/a/text()").extract()
                    item["desc"] = site.select("p[2]/text()").extract()
                    item["link"] = site.select("div/a/@href").extract()
                    item["rate"] = site.select("p/span/text()").extract()
                    items.append(item)
        return items