I am currently working on a scraper project where it is very important that EVERY request is properly handled, i.e., either an error is logged or a successful result is saved.
At first, I thought it would be more "logical" to raise exceptions in the parsing callback and process them all in the errback, which would make the code more readable. But when I tried it, I found that the errback can only trap errors from the downloader module, such as non-200 response statuses. If I raise a self-implemented ParseError in the callback, the spider just raises it and stops. Here is roughly what I tried:
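A minimal sketch of my attempt (the spider name, URL, and extraction logic are simplified placeholders):

```python
import scrapy


class ParseError(Exception):
    pass


class MySpider(scrapy.Spider):
    name = "myspider"
    start_urls = ["https://example.com"]

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse,
                                 errback=self.handle_error)

    def parse(self, response):
        title = response.css("title::text").get()
        if title is None:
            # I expected this to end up in handle_error -- it doesn't
            raise ParseError(f"no <title> on {response.url}")
        yield {"url": response.url, "title": title}

    def handle_error(self, failure):
        # only reached for downloader-level failures (DNS errors,
        # timeouts, non-2xx responses), not for ParseError above
        self.logger.error(repr(failure))
```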
Yes, you are right - callback and errback are meant to be used only with the downloader: Twisted is used for downloading resources, and Twisted uses Deferreds - that's why callbacks are needed.
The only async part in Scrapy is usually the downloader; all the other parts work synchronously.
So, if you want to catch all non-downloader errors, do it yourself:
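For example, something along these lines (a minimal sketch; ParseError and the extraction details are placeholders for your own logic):

```python
import scrapy


class ParseError(Exception):
    pass


class MySpider(scrapy.Spider):
    name = "myspider"
    start_urls = ["https://example.com"]

    def parse(self, response):
        try:
            yield self.extract_item(response)
        except ParseError as e:
            # every response ends up either here (logged) or as an item,
            # so no request falls through the cracks
            self.logger.error("failed to parse %s: %s", response.url, e)

    def extract_item(self, response):
        # hypothetical extraction logic; raise ParseError on bad data
        title = response.css("title::text").get()
        if title is None:
            raise ParseError("missing <title>")
        return {"url": response.url, "title": title}
```

This way the errback stays reserved for downloader failures (timeouts, DNS errors, non-2xx responses), while the try/except inside the callback guarantees that parsing failures are always logged rather than crashing the spider.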