How do I create rules for a CrawlSpider using Scrapy


Question:


from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from manga.items import MangaItem

class MangaHere(BaseSpider):
    name = "mangah"
    allowed_domains = ["mangahere.com"]
    start_urls = ["http://www.mangahere.com/seinen/"]

    def parse(self,response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li/div')
        items = []
        for site in sites:
            rating = site.select("p/span/text()").extract()
            if rating > 4.5:
                item = MangaItem()
                item["title"] = site.select("div/a/text()").extract()
                item["desc"] = site.select("p[2]/text()").extract()
                item["link"] = site.select("div/a/@href").extract()
                item["rate"] = site.select("p/span/text()").extract()
                items.append(item)

        return items

My goal is to crawl www.mangahere.com/seinen, or anything else on that site. I want to go through every page and collect the books with a rating greater than 4.5. I started out with a BaseSpider and tried copying and reading the Scrapy tutorial, but it pretty much went over my head. I'm here to ask what I need to do to create my rules, and how to do it. I also can't seem to get my condition to work: the code either returns only the very first item and stops, regardless of the condition, or grabs everything, again regardless of the condition. I know it's probably pretty messed-up code, but I'm still struggling to learn. Feel free to touch up the code or offer other advice.


Answer 1:


Strictly speaking, this isn't answering the question since my code uses a BaseSpider instead of a CrawlSpider, but it does fulfil the OP's requirement so...
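
For the record, since the question title asks specifically about CrawlSpider rules, here is a minimal sketch of what a rules-based version might look like, written against the same old-style Scrapy API used in the code further down. The allow pattern and XPaths are assumptions based on the OP's selectors and the site's apparent URL layout, so treat this as a starting point rather than a verified solution. One thing to keep in mind: a CrawlSpider should not override parse, so the extraction lives in a separately named callback.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from manga.items import MangaItem

class MangaHereCrawl(CrawlSpider):
    name = "mangah_crawl"
    allowed_domains = ["mangahere.com"]
    start_urls = ["http://www.mangahere.com/seinen/"]

    rules = (
        # Follow pagination links such as /seinen/2.htm and hand each listing
        # page to parse_item; the allow pattern is an assumption about the
        # site's URL scheme.
        Rule(SgmlLinkExtractor(allow=(r'/seinen/\d+\.htm',)),
             callback='parse_item', follow=True),
    )

    def parse_start_url(self, response):
        # Rules only fire on links extracted from responses, so handle the
        # first listing page explicitly.
        return self.parse_item(response)

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        for site in hxs.select('//ul/li/div'):
            rating = site.select("p/span/text()").extract()
            if rating and float(rating[0]) > 4.5:
                item = MangaItem()
                item["title"] = site.select("div/a/text()").extract()
                item["desc"] = site.select("p[2]/text()").extract()
                item["link"] = site.select("div/a/@href").extract()
                item["rate"] = rating
                yield item

The design point is that the Rule takes care of following the pagination, while the callback only does the per-page extraction; the BaseSpider version below does the same job but builds the page Requests by hand.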

Points to note:

  1. Since not all of the pagination links are available (you get the first nine and then the last two), I employed a somewhat hacktastic approach. Using the first response in the parse callback, I search for a link with a class of "next" (there is only one, so have a look to see which link it corresponds to) and then find its immediately preceding sibling. This gives me a handle on the total number of pages in the seinen category (currently 45).
  2. Next, we yield a Request object for the first page to be processed by the parse_item callback.
  3. Then, given that we have determined that there are 45 pages in total, we generate a whole series of Request objects for "./seinen/2.htm" all the way to "./seinen/45.htm".
  4. Since rating is a list whose values are strings representing floats (which I should have realised given that the condition is 4.5), the way to fix the error encountered is to loop through the list of ratings and cast each item to a float.

Anyway, have a look at the following code and see if it makes sense. In theory you should be able to extend this code easily to scrape multiple categories, though that is left as an exercise for the OP. :)

from scrapy.spider import BaseSpider
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from tutorial.items import MangaItem
from urlparse import urlparse

class MangaHere(BaseSpider):
    name = "mangah2"
    start_urls = ["http://www.mangahere.com/seinen/"]
    allowed_domains = ["mangahere.com"]

    def parse(self, response):
        # get index depth ie the total number of pages for the category
        hxs = HtmlXPathSelector(response)
        next_link = hxs.select('//a[@class="next"]')
        index_depth = int(next_link.select('preceding-sibling::a[1]/text()').extract()[0])

        # create a request for the first page
        url = urlparse("http://www.mangahere.com/seinen/")
        yield Request(url.geturl(), callback=self.parse_item)

        # create a request for each subsequent page in the form "./seinen/x.htm"
        for x in xrange(2, index_depth + 1):  # include the last page, index_depth
            pageURL = "http://www.mangahere.com/seinen/%s.htm" % x
            url = urlparse(pageURL)
            yield Request(url.geturl(), callback=self.parse_item)

    def parse_item(self,response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li/div')
        items = []
        for site in sites:
            rating = site.select("p/span/text()").extract()
            for r in rating:
                if float(r) > 4.5:
                    item = MangaItem()
                    item["title"] = site.select("div/a/text()").extract()
                    item["desc"] = site.select("p[2]/text()").extract()
                    item["link"] = site.select("div/a/@href").extract()
                    item["rate"] = site.select("p/span/text()").extract()
                    items.append(item)
        return items
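
Assuming the project is laid out as the imports above suggest, the spider can then be run from the project directory with Scrapy's crawl command, writing the collected items to a feed file (older Scrapy releases may also need -t json to pick the export format explicitly):

scrapy crawl mangah2 -o seinen_items.json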


Source: https://stackoverflow.com/questions/14417512/how-do-i-create-rules-for-a-crawlspider-using-scrapy
