I've been using the Scrapy web-scraping framework pretty extensively, but recently I discovered that there is another framework/system called pyspider.
pyspider and Scrapy have the same purpose, web scraping, but take a different view of how to do it.
- A spider should never stop until the WWW is dead. (Information changes and data on websites is updated all the time; a spider should have the ability and the responsibility to scrape the latest data. That's why pyspider has a URL database, a powerful scheduler, `@every`, `age`, etc.; see the handler sketch after the lists below.)
- pyspider is a service more than a framework. (Components run in isolated processes; the lite `all-in-one` version runs as a service too; you don't need a Python environment, only a browser; everything about fetching or scheduling is controlled by the script via the API, not by startup parameters or global configs; resources/projects are managed by pyspider; etc.)
- pyspider is a spider system. (Any component can be replaced, even implemented in C/C++/Java or any other language, for better performance or larger capacity.)
and
- `on_start` vs `start_urls`
- token-bucket traffic control vs `download_delay`
- `return json` vs `class Item`
- message queue vs `Pipeline`
- built-in URL database vs `set`
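To make the pyspider side of that list concrete, here is a minimal handler sketch, closely following pyspider's default project template, with `http://example.com/` as a placeholder URL: `@every` reschedules `on_start`, `age` controls how long a crawled page stays valid, and `detail_page` simply returns a dict instead of declaring an `Item` class.

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}

    # Run on_start once a day; the scheduler re-fires it automatically.
    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://example.com/', callback=self.index_page)

    # A page fetched here is considered valid for 10 days; within that
    # window the scheduler won't re-crawl the same URL.
    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http"]').items():
            self.crawl(each.attr.href, callback=self.detail_page)

    def detail_page(self, response):
        # No Item class, no Pipeline: just return a dict ("return json").
        return {
            "url": response.url,
            "title": response.doc('title').text(),
        }
```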
In fact, I haven't borrowed much from Scrapy; pyspider is really different from Scrapy.
But why not try it yourself? pyspider is also fast, has an easy-to-use API, and you can try it without installing anything.
Since I use both Scrapy and pyspider, I would like to suggest the following:
If the website is really small or simple, try pyspider first, since it has almost everything you need built in.
However, if you tried pyspider and found it can't fit your needs, it's time to use Scrapy:

- migrate `on_start` to `start_requests`
- migrate `index_page` to `parse`
- migrate `detail_page` to a second callback of your own (Scrapy doesn't prescribe a name)
- change `self.crawl` to `response.follow`
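As a rough sketch of where those steps land you (assuming `example.com` as a placeholder site and `parse_detail` as an illustrative name for the second callback):

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    # on_start -> start_requests
    def start_requests(self):
        yield scrapy.Request("http://example.com/", callback=self.parse)

    # index_page -> parse
    def parse(self, response):
        for href in response.css('a[href^="http"]::attr(href)'):
            # self.crawl(...) -> response.follow(...)
            yield response.follow(href, callback=self.parse_detail)

    # detail_page -> a second callback; the name is your choice
    def parse_detail(self, response):
        # pyspider's `return dict` becomes yielding an item (a dict works)
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
        }
```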
Then you are almost done. Now you can play with Scrapy's advanced features like middlewares, items, pipelines, etc.