Scrapy: how to debug scrapy lost requests


Question


I have a Scrapy spider, but sometimes it doesn't return requests.

I found this by adding log messages before yielding a request and after receiving a response.

The spider iterates over pages and, on each page, parses a link for item scraping.

Here is part of the code:

# Imports needed for this snippet (older Scrapy, BaseSpider-era API)
from scrapy import log
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.spider import BaseSpider


class SampleSpider(BaseSpider):
    ....
    def parse_page(self, response):
        ...
        request = Request(target_link, callback=self.parse_item_general)
        request.meta['date_updated'] = date_updated
        self.log('parse_item_general_send {url}'.format(url=request.url), level=log.INFO)
        yield request

    def parse_item_general(self, response):
        self.log('parse_item_general_recv {url}'.format(url=response.url), level=log.INFO)
        sel = Selector(response)
        ...

I've compared the counts of each log message, and there are more "parse_item_general_send" messages than "parse_item_general_recv" messages.

There are no 400 or 500 errors in the final statistics; every response has status code 200. It looks like the requests just disappear.
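
For reference, the comparison itself can be done by counting the two markers in the saved crawl log, e.g. (a minimal sketch; 'crawl.log' is just an assumed file name):

# Count the send/recv markers to quantify how many requests went missing.
# 'crawl.log' is an assumed name for the file the crawl output was written to.
with open('crawl.log') as f:
    text = f.read()
sent = text.count('parse_item_general_send')
received = text.count('parse_item_general_recv')
print('sent: %d, received: %d, lost: %d' % (sent, received, sent - received))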

I've also added these settings to minimize possible sources of error:

CONCURRENT_REQUESTS_PER_DOMAIN = 1
DOWNLOAD_DELAY = 0.8

Because of the asynchronous nature of Twisted, I don't know how to debug this. I've found a similar question, Python Scrapy not always downloading data from website, but it has no answers.
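
For illustration, attaching an errback to each request would at least make download failures show up explicitly next to the send/recv messages (a sketch only; parse_item_error is a hypothetical handler, not part of the original spider):

    def parse_page(self, response):
        ...
        request = Request(target_link,
                          callback=self.parse_item_general,
                          errback=self.parse_item_error)  # hypothetical error handler
        request.meta['date_updated'] = date_updated
        yield request

    def parse_item_error(self, failure):
        # failure is a twisted.python.failure.Failure describing what went wrong
        self.log('parse_item_general_fail {err}'.format(err=failure.getErrorMessage()),
                 level=log.ERROR)

Even with an errback like this, nothing gets logged when a request never leaves the scheduler, which is what the answer below points at.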


Answer 1:


On the same note as Rho, you can add the setting

DUPEFILTER_CLASS = 'scrapy.dupefilter.BaseDupeFilter' 

to your "settings.py" which will remove the url caching. This is a tricky issue since there isn't a debug string in the scrapy logs that tells you when it uses a cached result.



Source: https://stackoverflow.com/questions/20723371/scrapy-how-to-debug-scrapy-lost-requests
