How to force Scrapy to crawl duplicate URLs?


You're probably looking for the dont_filter=True argument on Request(). See http://doc.scrapy.org/en/latest/topics/request-response.html#request-objects
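For example, a minimal sketch of a spider re-requesting a URL it has already visited (the spider name, start URL, and parse_again callback are placeholders, not part of the original answer):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # dont_filter=True tells the scheduler to skip the duplicate filter
        # for this request, so the same URL is crawled again
        yield scrapy.Request(response.url, callback=self.parse_again, dont_filter=True)

    def parse_again(self, response):
        self.logger.info('Crawled %s a second time', response.url)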

A more elegant solution is to disable the duplicate filter altogether:

# settings.py
DUPEFILTER_CLASS = 'scrapy.dupefilters.BaseDupeFilter'

This way you don't have to clutter your Request-creation code with dont_filter=True. Note that this only disables duplicate filtering; other filters, such as offsite filtering, remain in effect.

If you want to apply this setting selectively to only some of the spiders in your project, you can set it via custom_settings in the spider implementation:

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    custom_settings = {
        'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
    }
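With the filter disabled for this spider, identical requests are scheduled without needing dont_filter=True. A rough sketch of a callback inside MySpider (the URL and parse_page callback are placeholder assumptions):

    def parse(self, response):
        # Both identical requests are scheduled because BaseDupeFilter never
        # filters anything; the default RFPDupeFilter would drop the second.
        yield scrapy.Request('http://example.com/page', callback=self.parse_page)
        yield scrapy.Request('http://example.com/page', callback=self.parse_page)

    def parse_page(self, response):
        self.logger.info('Got %s', response.url)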