The script (below) from this tutorial contains two `start_urls`.
```python
from scrapy.spider import Spider
from scrapy.selector import Selector
from dirb...
```
`start_urls` contains the links from which the spider starts crawling. If you want to crawl recursively, you should use `CrawlSpider` and define rules for it. See http://doc.scrapy.org/en/latest/topics/spiders.html for an example.
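As a rough sketch of what that can look like (the domain, the `/category/` URL pattern, and the `parse_item` callback below are made-up placeholders; the import paths are the ones used by Scrapy 1.0+, while older releases exposed them under `scrapy.contrib.*`):

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class ExampleCrawlSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.com"]
    # start_urls only lists the entry points; the rules below decide
    # which further links get followed from each downloaded page.
    start_urls = ["http://www.example.com/"]

    # Each Rule tells the CrawlSpider which links to extract from a response,
    # which callback to run on them, and whether to keep following links.
    rules = (
        Rule(LinkExtractor(allow=r"/category/"), callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        # Pull out whatever fields you need from each followed page.
        yield {
            "url": response.url,
            "title": response.xpath("//title/text()").extract_first(),
        }
```

With a plain `Spider`, only the URLs in `start_urls` (plus whatever requests your own callbacks yield) are fetched; the `CrawlSpider` rules are what make the crawl recursive.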