Scrapy cmdline.execute stops script

Submitted anonymously (unverified) on 2019-12-03 00:56:02

Question:

When I call

from scrapy import cmdline

cmdline.execute("scrapy crawl website".split())
print "Hello World"

the script stops right after cmdline.execute: nothing after that call runs, and "Hello World" is never printed. How do I fix this?

Answer 1:

By taking a look at the execute function in Scrapy's cmdline.py, you'll see the final line is:

sys.exit(cmd.exitcode) 
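Because sys.exit works by raising SystemExit, the exit can technically be intercepted by wrapping the call in a try/except; a minimal sketch, not part of the original answer (the spider name is a placeholder):

from scrapy import cmdline

try:
    cmdline.execute("scrapy crawl website".split())
except SystemExit:
    pass  # execute() always ends in sys.exit(); swallow it to keep going

print("Hello World")  # now reached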

So if you call the execute function directly, there is no way to stop it from reaching that sys.exit call. Catching the SystemExit as above, or monkey-patching execute, will work, but neither is a clean option! A better option is to avoid calling the execute function entirely, and instead use the custom function below:

from twisted.internet import reactor

from scrapy import log, signals
from scrapy.crawler import Crawler as ScrapyCrawler
from scrapy.utils.project import get_project_settings
from scrapy.xlib.pydispatch import dispatcher


def scrapy_crawl(name):

    def stop_reactor():
        reactor.stop()

    # Stop the Twisted reactor as soon as the spider closes, so that
    # reactor.run() below returns and the rest of the script continues.
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)

    scrapy_settings = get_project_settings()
    crawler = ScrapyCrawler(scrapy_settings)
    crawler.configure()
    spider = crawler.spiders.create(name)
    crawler.crawl(spider)
    crawler.start()
    log.start()
    reactor.run()  # blocks until stop_reactor() fires; no sys.exit() involved

And you can call it like this:

scrapy_crawl("your_crawler_name") 
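Note that this answer targets a pre-1.0 Scrapy API: scrapy.log, Crawler.configure, and scrapy.xlib.pydispatch have all since been removed. On modern Scrapy the supported equivalent is CrawlerProcess, which likewise returns instead of calling sys.exit; a rough sketch:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def scrapy_crawl(name):
    # CrawlerProcess manages the Twisted reactor itself and simply
    # returns when crawling finishes, instead of exiting the process.
    process = CrawlerProcess(get_project_settings())
    process.crawl(name)  # name of a spider registered in the project
    process.start()      # blocks until the crawl is done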


Answer 2:

Another option is to run the crawl through subprocess.call, so that Scrapy exits inside a separate process. For example, on Windows with PowerShell:

import subprocess

# PowerShell runs the crawl in a child process; Scrapy's sys.exit()
# terminates only that process, not this script.
subprocess.call([r'C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe',
                 '-ExecutionPolicy', 'Unrestricted',
                 'scrapy crawl website -o items.json -t json'])
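The same idea works on any platform by invoking the scrapy executable directly; a minimal sketch, assuming scrapy is on the PATH and using placeholder spider and output names:

import subprocess

# Invoking scrapy directly avoids the PowerShell dependency; the crawl
# still runs (and exits) in a child process, so this script continues.
subprocess.call(['scrapy', 'crawl', 'website', '-o', 'items.json'])
print("Hello World")  # still executes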


