CrawlerRunner does not crawl pages with Crochet

Submitted by 。_饼干妹妹 on 2021-02-17 07:04:07

Question


I am trying to launch a Scrapy spider from a script with CrawlerRunner(), so that I can run it in AWS Lambda.

I saw a solution on Stack Overflow that uses the crochet library, but it doesn't work for me.

Links: StackOverflow 1 StackOverflow 2

This is the code:

import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from scrapy.utils.log import configure_logging

# From response in Stackoverflow: https://stackoverflow.com/questions/41495052/scrapy-reactor-not-restartable
from crochet import setup
setup()

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        print('Scraped page', page)

    def closed(self, reason):
        print('Closed spider:', reason)


def run_spider():

    configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})

    crawler = CrawlerRunner(get_project_settings())
    crawler.crawl(QuotesSpider)


run_spider()

When I execute the script, it returns this log:

INFO: Overridden settings: {}
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-01-28 16:49:52 [scrapy.core.engine] INFO: Spider opened
2019-01-28 16:49:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-28 16:49:52 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023

Why doesn't the crawler crawl with this spider? I am running on a Mac with Python 3.7.1.

Any help? I really appreciate your support.


Answer 1:


I'm not sure if you have already resolved this problem, but anyway.

Without using Crochet, you can write a scraper using Scrapy's CrawlerRunner as follows.

import scrapy
from scrapy.crawler import CrawlerRunner
from twisted.internet import reactor

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url)
    def parse(self, response):
        page = response.url.split('/')[-2]
        print('Scraped page', page)
    def closed(self, reason):
        print('Closed spider:', reason)

def run_spider():
    crawler = CrawlerRunner()
    d = crawler.crawl(QuotesSpider)
    d.addCallback(lambda _: reactor.stop())
    reactor.run()

if __name__ == '__main__':
    run_spider()
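
A side note the original answer doesn't cover (hedged, based on Twisted's standard Deferred API, not on anything in the question): with addCallback alone, the reactor would keep running if the crawl errored out, because callbacks don't fire on failures. Deferred.addBoth fires on success and failure alike, so the reactor always stops:

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner

def run_spider():
    crawler = CrawlerRunner()
    d = crawler.crawl(QuotesSpider)
    # addBoth fires on success *and* failure, so reactor.stop() always runs
    d.addBoth(lambda _: reactor.stop())
    reactor.run()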

If you want to use Crochet with Scrapy's CrawlerRunner for any reason, then:

  1. wrap your run_spider function with Crochet's @wait_for decorator,
  2. and return a deferred from the decorated run_spider function.

Try this!

from crochet import setup, wait_for
import scrapy
from scrapy.crawler import CrawlerRunner

setup()

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url)
    def parse(self, response):
        page = response.url.split('/')[-2]
        print('Scraped page', page)
    def closed(self, reason):
        print('Closed spider:', reason)

@wait_for(10)
def run_spider():
    crawler = CrawlerRunner()
    d = crawler.crawl(QuotesSpider)
    return d

if __name__ == '__main__':
    run_spider()
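
A note on the timeout (my reading of the Crochet docs, not part of the original answer): @wait_for(10) blocks the calling thread until the returned deferred fires, and if it has not fired within 10 seconds, Crochet cancels the deferred and raises crochet.TimeoutError. If your crawl may take longer, raise the timeout or catch the exception:

from crochet import TimeoutError as CrochetTimeoutError

try:
    run_spider()
except CrochetTimeoutError:
    # the deferred did not fire within 10 seconds and was cancelled
    print('Crawl timed out')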



Answer 2:


I ran your code and can see that the spider is running, but I can't see anything printed from the parse function.

I added

time.sleep(10)

at the end of your code, and then I could see the output of the parse function.

So the reason might be that the main process ends before the crawl gets into parse.
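
For completeness, a sketch of that workaround applied to the crochet-based script in the question (setup() runs the reactor in a background thread, so nothing keeps the main thread alive after run_spider() returns):

import time

run_spider()    # crawl() returns immediately; the crawl runs on the reactor thread
time.sleep(10)  # keep the main thread alive long enough for the crawl to finish

This is more fragile than @wait_for above, since 10 seconds is a guess rather than an actual completion signal.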



Source: https://stackoverflow.com/questions/54409036/crawlerrunner-not-crawl-pages-with-crochet
