For development purposes, I would like to stop all Scrapy crawling activity as soon as the first exception occurs (in a spider or a pipeline).
Any advice?
It depends entirely on your business logic, but this will work for you:
self.crawler.engine.close_spider(self, 'log message')  # from inside a spider; the crawler is exposed as self.crawler
Suggested reading: Scrapy's built-in exceptions documentation (https://docs.scrapy.org/en/latest/topics/exceptions.html), in particular CloseSpider.
And the worst solution is to kill the whole Python process:
import sys

sys.exit("SHUT DOWN EVERYTHING!")