Scrapy Limit Requests For Testing

Asked by 执笔经年 on 2020-12-21 03:36

I've been searching the Scrapy documentation for a way to limit the number of requests my spiders are allowed to make. During development I don't want to sit here and wait for a full crawl to finish on every run.

2 Answers
  • 2020-12-21 03:58

    You are looking for the CLOSESPIDER_PAGECOUNT setting of the CloseSpider extension:

    An integer which specifies the maximum number of responses to crawl. If the spider crawls more than that, the spider will be closed with the reason closespider_pagecount. If zero (or not set), spiders won't be closed by number of crawled responses.
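
    For quick development runs, the same setting can also be passed per invocation with Scrapy's -s command-line override, so nothing in the project settings needs to change (the spider name here is a placeholder):

    scrapy crawl myspider -s CLOSESPIDER_PAGECOUNT=10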

  • 2020-12-21 04:18

    As an addition to @alecxe's answer, it's worth noting that:

    Requests which are currently in the downloader queue (up to CONCURRENT_REQUESTS requests) are still processed.

    Although this note currently appears in the documentation only for CLOSESPIDER_ITEMCOUNT (and not for CLOSESPIDER_PAGECOUNT), it applies to both settings, because that is how the extension behaves.

    One can verify it with the following code:

    # scraper.py

    from scrapy import Spider, Request

    class MySpider(Spider):
        name = 'MySpider'
        # close the spider once 2 responses have been crawled
        custom_settings = {'CLOSESPIDER_PAGECOUNT': 2}

        def start_requests(self):
            data_urls = [
                'https://www.example.com', 'https://www.example1.com', 'https://www.example2.com'
            ]
            for url in data_urls:
                yield Request(url=url, callback=lambda res: print(res))
    

    Assuming all 3 requests are yielded before the first two responses come back (this happened every time I tested it), the third request (to example2.com) will still be executed, so running it:

    scrapy runspider scraper.py
    

    ... will output the following (note that although the spider entered the Closing spider stage, GET https://example2.com was still executed):

    INFO: Scrapy 2.3.0 started (bot: scrapybot)  
    [...]  
    INFO: Overridden settings:  
    {'CLOSESPIDER_PAGECOUNT': 2, 'SPIDER_LOADER_WARN_ONLY': True}
    [...]  
    INFO: Spider opened
    [...]
    DEBUG: Crawled (200) <GET https://www.example.com> (referer: None)
    <200 https://www.example.com>
    DEBUG: Crawled (200) <GET https://www.example1.com> (referer: None)
    INFO: Closing spider (closespider_pagecount)
    <200 https://www.example1.com>
    DEBUG: Redirecting (301) to <GET https://example2.com/> from <GET https://www.example2.com>
    INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 647,
     'downloader/request_count': 3,
     'downloader/request_method_count/GET': 3,
     'downloader/response_bytes': 3659,
     'downloader/response_count': 3,
     'downloader/response_status_count/200': 2,
     'downloader/response_status_count/301': 1,
     'elapsed_time_seconds': 11.052137,
     'finish_reason': 'closespider_pagecount',
     'finish_time': datetime.datetime(2020, 10, 4, 11, 28, 41, 801185),
     'log_count/DEBUG': 3,
     'log_count/INFO': 10,
     'response_received_count': 2,
     'scheduler/dequeued': 3,
     'scheduler/dequeued/memory': 3,
     'scheduler/enqueued': 4,
     'scheduler/enqueued/memory': 4,
     'start_time': datetime.datetime(2020, 10, 4, 11, 28, 30, 749048)}
    INFO: Spider closed (closespider_pagecount)
    
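    Since the overshoot is bounded by the number of requests already in the downloader, one mitigation (a sketch inferred from the quoted documentation, not something the docs spell out) is to lower CONCURRENT_REQUESTS alongside the page count:

    from scrapy import Spider, Request

    class MySpider(Spider):
        name = 'MySpider'
        custom_settings = {
            'CLOSESPIDER_PAGECOUNT': 2,
            # only 1 request in the downloader at a time, so at most one
            # extra request can be in flight when the limit is reached
            'CONCURRENT_REQUESTS': 1,
        }

        def start_requests(self):
            data_urls = [
                'https://www.example.com', 'https://www.example1.com', 'https://www.example2.com'
            ]
            for url in data_urls:
                yield Request(url=url, callback=lambda res: print(res))
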

    Alternatively, the overshoot can be prevented entirely by introducing an instance variable (e.g., limit) that caps how many requests are yielded in the first place:

    from scrapy import Spider, Request

    class MySpider(Spider):
        name = 'MySpider'
        limit = 2  # maximum number of requests to yield

        def start_requests(self):
            data_urls = [
                'https://www.example.com', 'https://www.example1.com', 'https://www.example2.com'
            ]
            for url in data_urls:
                if self.limit > 0:
                    yield Request(url=url, callback=lambda res: print(res))
                    self.limit -= 1
    

    Now only 2 requests are queued and executed. Output:

    [...]
    DEBUG: Crawled (200) <GET https://www.example.com> (referer: None)
    <200 https://www.example.com>
    DEBUG: Crawled (200) <GET https://www.example1.com> (referer: None)
    <200 https://www.example1.com>
    INFO: Closing spider (closespider_pagecount)
    INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 431,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 3468,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'elapsed_time_seconds': 5.827646,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2020, 10, 4, 11, 29, 41, 801185),
     'log_count/DEBUG': 2,
     'log_count/INFO': 10,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2020, 10, 4, 11, 29, 30, 749048)}
    INFO: Spider closed (finished)
    
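    If you prefer stopping from inside a callback instead, Scrapy's built-in CloseSpider exception can be raised once a counter hits the limit. A minimal sketch of the same idea (the counter name and reason string are arbitrary):

    from scrapy import Spider, Request
    from scrapy.exceptions import CloseSpider

    class MySpider(Spider):
        name = 'MySpider'
        seen = 0  # responses handled so far

        def start_requests(self):
            data_urls = [
                'https://www.example.com', 'https://www.example1.com', 'https://www.example2.com'
            ]
            for url in data_urls:
                yield Request(url=url, callback=self.parse)

        def parse(self, response):
            print(response)
            self.seen += 1
            if self.seen >= 2:
                # closes the spider; finish_reason becomes 'limit_reached'
                raise CloseSpider('limit_reached')

    Note that, just like with CLOSESPIDER_PAGECOUNT, requests already in the downloader may still complete after the exception is raised.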