After several readings of the Scrapy docs, I'm still not catching the difference between using CrawlSpider rules and implementing my own link extraction mechanism in the callback function.
If you want selective crawling, like fetching only the "Next" links for pagination, it's better to write your own crawler and extract the links yourself in the callback. But for general crawling, you should use CrawlSpider and filter out the links you don't need to follow using Rules and a process_links function, as in the sketch below.
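Here is a minimal sketch of that CrawlSpider approach. The domain, the URL pattern in the LinkExtractor, and the filtering condition in process_links are hypothetical placeholders; the imports use the current module paths (older Scrapy versions exposed these classes under scrapy.contrib instead).

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class ExampleCrawlSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.com"]        # hypothetical domain
    start_urls = ["http://example.com/"]

    rules = (
        # Follow links matching the pattern, send each response to parse_item,
        # and run the extracted links through filter_links first.
        Rule(
            LinkExtractor(allow=r"/items/"),  # hypothetical URL pattern
            callback="parse_item",
            follow=True,
            process_links="filter_links",
        ),
    )

    def filter_links(self, links):
        # Drop links you don't want to follow, e.g. print/preview pages.
        return [link for link in links if "print=1" not in link.url]

    def parse_item(self, response):
        # Extract whatever you need from the matched pages.
        yield {"url": response.url, "title": response.css("title::text").get()}
```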
Take a look at the CrawlSpider code in scrapy/contrib/spiders/crawl.py; it isn't too complicated.