How to force Scrapy to crawl duplicate URLs?

Asked by 遥遥无期 on 2020-12-14 17:00

I am learning Scrapy, a web crawling framework. By default it does not crawl duplicate URLs, i.e. URLs that Scrapy has already crawled.

How can I make Scrapy crawl duplicate URLs?

2 Answers
  •  借酒劲吻你
     2020-12-14 17:26

    You're probably looking for the dont_filter=True argument on Request(). See http://doc.scrapy.org/en/latest/topics/request-response.html#request-objects
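    To illustrate why dont_filter=True works, here is a minimal pure-Python sketch of the duplicate-filtering idea (this is not Scrapy's actual code; Scrapy's real filter, RFPDupeFilter, compares request fingerprints rather than raw URLs, but the control flow is analogous):

    ```python
    # Toy model of Scrapy's duplicate filtering: the scheduler drops any
    # request whose URL it has already seen, UNLESS the request was created
    # with dont_filter=True, which bypasses the check entirely.

    class Request:
        def __init__(self, url, dont_filter=False):
            self.url = url
            self.dont_filter = dont_filter

    class Scheduler:
        def __init__(self):
            self.seen = set()   # URLs already scheduled
            self.queue = []     # requests accepted for crawling

        def enqueue(self, request):
            if request.dont_filter or request.url not in self.seen:
                self.seen.add(request.url)
                self.queue.append(request)
                return True     # accepted
            return False        # duplicate dropped

    scheduler = Scheduler()
    scheduler.enqueue(Request("http://example.com/page"))  # accepted
    scheduler.enqueue(Request("http://example.com/page"))  # dropped: duplicate
    scheduler.enqueue(Request("http://example.com/page", dont_filter=True))  # accepted
    ```

    In an actual spider callback, the equivalent is to yield the request with the flag set, e.g. yield scrapy.Request(url, callback=self.parse, dont_filter=True).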
