For development purposes, I would like to stop all Scrapy crawling activity as soon as the first exception (in a spider or a pipeline) occurs.
Any advice?
In a spider, you can just raise a CloseSpider exception:
from scrapy.exceptions import CloseSpider

def parse_page(self, response):
    # response.body is bytes, so compare against a bytes literal
    if b'Bandwidth exceeded' in response.body:
        raise CloseSpider('bandwidth_exceeded')
For the others (middlewares, pipelines, etc.), you can manually call the engine's close_spider, as akhter mentioned.
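For example, here is a minimal sketch of an item pipeline that shuts the crawl down on the first exception. FailFastPipeline and do_work are hypothetical names; the actual shutdown call is crawler.engine.close_spider(spider, reason), which every spider can reach through its crawler attribute.

class FailFastPipeline:
    def process_item(self, item, spider):
        try:
            # placeholder for your real pipeline logic
            return self.do_work(item)
        except Exception:
            # ask the engine to close the spider; pending requests are discarded
            spider.crawler.engine.close_spider(spider, reason='first_exception')
            raise

    def do_work(self, item):
        # hypothetical helper standing in for actual processing
        return item

Re-raising after close_spider keeps the exception visible in the logs while the engine winds the crawl down.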