Re-crawling websites fast

Submitted by 北战南征 on 2019-12-05 07:04:01

Question


I am developing a system that has to track the content of a few portals and check for changes every night (for example, download and index new pages that have been added during the day). The content of these portals will be indexed for searching. The problem is re-crawling the portals: the first crawl of a portal takes very long (examples: www.onet.pl, www.bankier.pl, www.gazeta.pl), and I want to re-crawl as fast as possible, for example by checking the date of modification. However, when I used wget to download www.bankier.pl, it complained that there is no Last-Modified header. Is there any way to re-crawl so many sites efficiently? I have also tried Nutch, but its re-crawl script does not seem to work properly, or perhaps it also depends on the same Last-Modified header. Is there a tool or crawler (like Nutch) that can update already-downloaded sites by adding the new pages?

Best regards, Wojtek
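
(For context: the wget mechanism the question alludes to is timestamping, which only re-downloads a page when the server reports it newer than the local copy, so it requires a Last-Modified header. A minimal sketch; the depth limit below is illustrative:)

    # Mirror one level of the portal; pages whose Last-Modified is not
    # newer than the local copy are skipped instead of re-downloaded.
    wget --timestamping --recursive --level=1 --no-parent www.bankier.pl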


Answer 1:


I recommend using curl to fetch only the headers (an HTTP HEAD request) and check whether the Last-Modified header has changed.

Example:

 curl --head www.bankier.pl
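
Building on that, a minimal sketch of a nightly check, assuming the server actually sends Last-Modified (the state file last-modified.txt is an illustrative name):

    #!/bin/sh
    # Fetch only the headers, extract Last-Modified, and compare it with
    # the value saved on the previous run; re-crawl only when it changes.
    URL=www.bankier.pl
    NEW=$(curl -sI "$URL" | grep -i '^Last-Modified:')
    OLD=$(cat last-modified.txt 2>/dev/null)
    if [ "$NEW" != "$OLD" ]; then
        echo "$NEW" > last-modified.txt
        echo "Changed - re-crawl $URL"
    fi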



Answer 2:


For Nutch, I have written a blog post on how to re-crawl with Nutch. Basically, you should set a low value for the db.fetch.interval.default setting. On the next fetch of a URL, Nutch will use the last fetch time as the value of the If-Modified-Since HTTP header; see the sketch below.
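
A minimal sketch of the relevant override in conf/nutch-site.xml, assuming Nutch 1.x; the one-day interval (86400 seconds) is an illustrative value:

    <!-- Re-fetch pages after one day instead of Nutch's 30-day default. -->
    <property>
      <name>db.fetch.interval.default</name>
      <value>86400</value>
    </property>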



Source: https://stackoverflow.com/questions/4618530/re-crawling-websites-fast
