Question:
Hi, I have a basic spider that runs to fetch all links on a given domain. I want to make sure it persists its state so that it can resume from where it left off. I have followed the documentation at http://doc.scrapy.org/en/latest/topics/jobs.html. The first run works fine and I end it with Ctrl+C, but when I try to resume it, the crawl stops on the first URL itself.
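For context, I am invoking the spider the way that page describes, passing a JOBDIR setting and re-running the same command to resume (crawls/something-1 is just an example directory; any writable path works):

scrapy crawl something -s JOBDIR=crawls/something-1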
Below is the log when it ends:
2016-08-29 16:51:08 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 896,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 35320,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'dupefilter/filtered': 149,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 8, 29, 16, 51, 8, 837853),
'log_count/DEBUG': 28,
'log_count/INFO': 7,
'offsite/domains': 22,
'offsite/filtered': 23,
'request_depth_max': 1,
'response_received_count': 4,
'scheduler/dequeued': 2,
'scheduler/dequeued/disk': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/disk': 2,
'start_time': datetime.datetime(2016, 8, 29, 16, 51, 7, 821974)}
2016-08-29 16:51:08 [scrapy] INFO: Spider closed (finished)
Here is my spider:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from Something.items import SomethingItem

class maxSpider(CrawlSpider):
    name = 'something'
    allowed_domains = ['thecheckeredflag.com', 'inautonews.com']
    start_urls = ['http://www.thecheckeredflag.com/', 'http://www.inautonews.com/']

    rules = (Rule(LinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        # Yield one item per extracted link that matches the allowed domains.
        for link in LinkExtractor(allow=self.allowed_domains, deny=()).extract_links(response):
            item = SomethingItem()
            item['url'] = link.url
            yield item
            # print item
Scrapy version: Scrapy 1.1.2
Python version: 2.7
I am new to Scrapy; if I need to post any more info, please let me know.
Answer 1:
The reason this was happening was that the spider process was being killed abruptly.

The spider was not shutting down properly when I hit Ctrl+C. A single Ctrl+C tells Scrapy to shut down gracefully: it finishes the in-flight requests and persists the scheduler queue and dupefilter to the JOBDIR. Hitting Ctrl+C a second time forces an immediate, unclean stop, which leaves the persisted state unusable. Once I let the crawler shut down cleanly the first time, it resumed properly too.

So basically, make sure you see the crawler end/shut down properly before trying to resume it.
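As a minimal sketch of how to make this harder to get wrong (the JOBDIR path and the dict item below are illustrative assumptions, not from the original post), you can set JOBDIR in the spider's custom_settings so every run is resumable without remembering the -s flag:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MaxSpider(CrawlSpider):
    name = 'something'
    allowed_domains = ['thecheckeredflag.com', 'inautonews.com']
    start_urls = ['http://www.thecheckeredflag.com/', 'http://www.inautonews.com/']
    rules = (Rule(LinkExtractor(allow=()), callback='parse_obj', follow=True),)

    # JOBDIR persists the scheduler queue and the dupefilter to disk so the
    # crawl can resume after a clean shutdown. 'crawls/something-1' is only an
    # example path; use a fresh directory for each new logical crawl.
    custom_settings = {'JOBDIR': 'crawls/something-1'}

    def parse_obj(self, response):
        for link in LinkExtractor(allow=self.allowed_domains).extract_links(response):
            yield {'url': link.url}

With this in place, running scrapy crawl something, interrupting it with a single Ctrl+C, waiting for the final stats dump, and then running the same command again resumes the crawl where it left off.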
Source: https://stackoverflow.com/questions/39211490/scrapy-spider-does-not-store-state-persistent-state