Scrapy XPath all the links on the page
Question

I am trying to collect all the URLs under a domain using Scrapy. I was trying to use CrawlSpider to start from the homepage and crawl the site. For each page, I want to use XPath to extract all the hrefs and store the data as key-value pairs. Key: the current URL; Value: all the links on this page.

```python
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'abc.com'
    allowed_domains = ['abc.com']
    start_urls = ['http://www.abc.com']

    # follow every link; call parse_item for each page crawled
    rules = (Rule(SgmlLinkExtractor(), callback='parse_item', follow=True), )

    def parse_item(self, response):
        # extract every href on the current page
        links = response.xpath('//a/@href').extract()
        return {response.url: links}
```
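To illustrate the key-value pairing being asked for without needing a live crawl, here is a minimal standard-library sketch of the same idea: given one page's URL and its HTML, collect every `href` (what `response.xpath('//a/@href')` would return in Scrapy) and map the page URL to that list. The sample HTML and function names are hypothetical, for illustration only.

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect every href attribute, mimicking the XPath //a/@href."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.hrefs.append(value)

def page_to_links(url, html):
    # Key: the current URL; Value: all the links found on that page
    parser = HrefCollector()
    parser.feed(html)
    return {url: parser.hrefs}

# hypothetical page content for illustration
sample = '<a href="/about">About</a> <a href="http://www.abc.com/news">News</a>'
print(page_to_links('http://www.abc.com', sample))
# → {'http://www.abc.com': ['/about', 'http://www.abc.com/news']}
```

In a real spider the crawling and de-duplication are handled by Scrapy itself; only the per-page extraction and the returned dict correspond to `parse_item` above.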