How to best develop web crawlers


Question


I often write crawlers to compile information, and whenever I come across a website with data I need, I write a new crawler specific to that site, using shell scripts most of the time and sometimes PHP.

The way I do it is with a simple for loop to iterate over the page list, wget to download each page, and sed, tr, awk, or other utilities to clean up the page and grab the specific info I need.
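For concreteness, a rough Python equivalent of that loop-download-and-clean workflow might look like the sketch below (the URL pattern and the extraction regex are hypothetical, not from the question):

```python
# Rough sketch of the same loop-download-and-clean workflow; the URL pattern
# and the extraction regex are placeholders for whatever the target site uses.
import re
import urllib.request

for page in range(1, 11):
    url = f"https://example.com/list?page={page}"  # hypothetical page list
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    # Crude text extraction, in the spirit of the sed/tr/awk one-liners.
    for title in re.findall(r'<h2 class="title">(.*?)</h2>', html):
        print(title)
```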

The whole process takes some time depending on the site, and even more to download all the pages. I also often run into an AJAX-heavy site that complicates everything.

I was wondering if there are better or faster ways to do this, or even applications or languages that help with this kind of work.


Answer 1:


Using regular expressions for parsing content is a bad idea that has been covered in questions here countless times.

You should parse the document into a DOM tree; then you can pull out any hyperlinks, stylesheets, script files, images, or other external links you want and traverse them accordingly.

Many scripting languages have packages for fetching web pages (e.g. cURL for PHP) and for parsing HTML (e.g. Beautiful Soup for Python). Go that route instead of the hacky solution of regular-expression matching.
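As an illustration, a minimal sketch of that DOM-based route in Python, using the requests and beautifulsoup4 packages (the URL is a placeholder, not something the answer specifies):

```python
# Minimal DOM-based extraction sketch using third-party `requests` and
# `beautifulsoup4` packages; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/page-list")
soup = BeautifulSoup(response.text, "html.parser")

# Walk the parsed tree instead of pattern-matching the raw HTML.
links = [a["href"] for a in soup.find_all("a", href=True)]
images = [img["src"] for img in soup.find_all("img", src=True)]
stylesheets = [l["href"] for l in soup.find_all("link", rel="stylesheet")]

for href in links:
    print(href)
```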




Answer 2:


If you use Python, Scrapy is great and easy to use.
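For example, a minimal Scrapy spider might look like this sketch (the start URL and CSS selectors are placeholders and would need to match the real site's markup):

```python
# Minimal Scrapy spider sketch; start URL and CSS selectors are placeholders.
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com/list"]

    def parse(self, response):
        # Yield one item per entry on the page.
        for entry in response.css("div.item"):
            yield {
                "title": entry.css("h2::text").get(),
                "link": entry.css("a::attr(href)").get(),
            }
        # Follow the pagination link so the whole listing gets crawled.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Saved as example_spider.py, it can be run without setting up a full project via `scrapy runspider example_spider.py -o items.json`; Scrapy handles request scheduling, retries, and concurrent downloads, which also addresses the speed concern in the question.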



Source: https://stackoverflow.com/questions/522975/how-to-best-develop-web-crawlers
