How to ignore already crawled URLs in Scrapy

Submitted by 家住魔仙堡 on 2019-12-13 20:05:39

Question


I have a crawler that looks something like this:

# Methods of a scrapy.Spider subclass; Request is scrapy.Request
def parse(self, response):
      ...
      yield Request(url=nextUrl, callback=self.parse2)

def parse2(self, response):
      ...
      yield Request(url=nextUrl, callback=self.parse3)

def parse3(self, response):
      ...

I want to add a rule that ignores URLs that have already been crawled when invoking parse2, while keeping the rule for parse3. I am still exploring the requests.seen file to see whether I can manipulate it.


Answer 1:


Check out the dont_filter request parameter, documented at http://doc.scrapy.org/en/latest/topics/request-response.html:

dont_filter (boolean) – indicates that this request should not be filtered by the scheduler. This is used when you want to perform an identical request multiple times, to ignore the duplicates filter. Use it with care, or you will get into crawling loops. Defaults to False.
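For example, here is a minimal sketch assuming the spider layout from the question; nextUrl stands in for whatever link you extract in each callback, and which request gets dont_filter=True depends on where you want duplicates to be allowed. Requests with the default dont_filter=False are dropped by the duplicates filter if the URL was already seen, while the request toward parse3 below bypasses the filter.

import scrapy
from scrapy import Request

class MySpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://example.com"]

    def parse(self, response):
        nextUrl = response.urljoin("next-page")  # hypothetical extracted link
        # Default (dont_filter=False): duplicate URLs are dropped by the dupefilter.
        yield Request(url=nextUrl, callback=self.parse2)

    def parse2(self, response):
        nextUrl = response.urljoin("next-page")  # hypothetical extracted link
        # dont_filter=True: scheduled even if the URL was already crawled,
        # so parse3 still runs on duplicates.
        yield Request(url=nextUrl, callback=self.parse3, dont_filter=True)

    def parse3(self, response):
        pass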




Answer 2:


You can set the rule in settings.py via the DUPEFILTER_CLASS setting. Refer to the documentation for DUPEFILTER_CLASS:

Default: 'scrapy.dupefilter.RFPDupeFilter'

The class used to detect and filter duplicate requests.

The default (RFPDupeFilter) filters based on request fingerprint using the scrapy.utils.request.request_fingerprint function.
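If you need per-request control beyond dont_filter, one common pattern is to subclass the default filter and point DUPEFILTER_CLASS at it. The sketch below is illustrative rather than part of the original answer: the module path myproject.dupefilters and the meta key "skip_dupefilter" are assumptions, and the import path is scrapy.dupefilters in current Scrapy (older versions used scrapy.dupefilter, as quoted above).

from scrapy.dupefilters import RFPDupeFilter  # scrapy.dupefilter in old Scrapy versions

class SelectiveDupeFilter(RFPDupeFilter):
    def request_seen(self, request):
        # Requests explicitly marked in meta bypass duplicate filtering;
        # everything else goes through the normal fingerprint check.
        if request.meta.get("skip_dupefilter"):
            return False
        return super().request_seen(request)

Then enable it in settings.py, e.g. DUPEFILTER_CLASS = "myproject.dupefilters.SelectiveDupeFilter", and set meta={"skip_dupefilter": True} on the requests that should never be filtered.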



Source: https://stackoverflow.com/questions/20415051/how-to-ignore-already-crawled-urls-in-scrapy
