Easiest way to run scrapy crawler so it doesn't block the script

暗喜 2020-12-14 13:27

The official docs give several ways to run Scrapy crawlers from code:

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess()
process.crawl(MySpider)
process.start()  # the script will block here until the crawling is finished
2 Answers
  •  南方客
    南方客 (OP)
    2020-12-14 14:22

    Netimen's answer is correct: process.start() calls reactor.run(), which blocks the thread. However, I don't think it is necessary to subclass billiard.Process. Although poorly documented, billiard.Process already provides an API for running another function asynchronously, without subclassing.

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings
    
    from billiard import Process
    
    crawler = CrawlerProcess(get_project_settings())
    # Keyword arguments for the target function go in kwargs=, not into
    # the Process() constructor itself.
    process = Process(target=crawler.start,
                      kwargs={'stop_after_crawl': False})


    def crawl(*args, **kwargs):
        crawler.crawl(*args, **kwargs)
        process.start()
    

    Note that if you don't pass stop_after_crawl=False, you may run into a ReactorNotRestartable exception when you run the crawler more than once.
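
    Since billiard is a fork of Python's multiprocessing, the non-blocking pattern above can be sketched with the standard library alone. Here, long_running_task is a hypothetical stand-in for crawler.start (it is not part of Scrapy); the point is that start() returns immediately while the task runs in a child process:

    ```python
    from multiprocessing import Process


    def long_running_task(result_path):
        # Hypothetical stand-in for crawler.start(): it blocks its own
        # child process, not the caller's.
        with open(result_path, "w") as f:
            f.write("done")


    def run_without_blocking(result_path):
        # Keyword arguments for the target go in kwargs=, exactly as with
        # billiard.Process above.
        p = Process(target=long_running_task,
                    kwargs={"result_path": result_path})
        p.start()  # returns immediately; the task runs in a child process
        return p   # caller may p.join() later, or just carry on


    if __name__ == "__main__":
        import os
        import tempfile

        path = os.path.join(tempfile.mkdtemp(), "out.txt")
        proc = run_without_blocking(path)
        # ...the main script is free to do other work here...
        proc.join()  # wait only when we choose to
    ```

    As with the billiard version, a Process object can only be started once, so a fresh Process is needed for each subsequent run.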
