Advice with crawling web site content

Submitted by 饼干妹妹 on 2019-12-08 05:33:44

Question


I am trying to crawl some website content using a combination of jsoup and Java, save the relevant details to my database, and repeat the same activity daily.

Here is the catch: when I open the website in a browser I get the fully rendered HTML (with all element tags present). When I test the JavaScript part (the one I'm supposed to use to extract the correct data), it works just fine.

But when I do a parse/get with jsoup (from a Java class), only the initial page is downloaded for parsing. In other words, the site has dynamic parts whose data I want, but since they are rendered asynchronously after the initial GET, I am unable to capture them with jsoup.
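To illustrate the problem: jsoup parses only the HTML the server returns and never executes JavaScript, so a container that the page fills in asynchronously stays empty. A minimal sketch (the `#prices` element, the `loadPrices` call, and the markup are all hypothetical):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class StaticParseDemo {

    // Extract text the same way a crawler would; no JavaScript runs here.
    static String priceText(String html) {
        Document doc = Jsoup.parse(html);
        return doc.select("#prices").text();
    }

    public static void main(String[] args) {
        // The server's initial response: a container that the page would
        // normally populate later via an asynchronous call.
        String initialHtml = "<html><body>"
                + "<div id=\"prices\"></div>"
                + "<script>loadPrices('/api/prices');</script>"
                + "</body></html>";

        // The script tag is present in the parsed tree, but its effect is not:
        // the div remains empty, which is exactly what the question describes.
        System.out.println("Extracted: '" + priceText(initialHtml) + "'");
    }
}
```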

Does anybody know a way around this? Am I using the right toolset? More experienced people, I'd appreciate your advice.


Answer 1:


First, check whether the website you're crawling requires any of the following to show all of its content:

  • Authentication with a login/password
  • Some sort of session validation via HTTP headers
  • Cookies
  • Some sort of time delay to load all the content (sites heavy on JavaScript libraries, CSS, and asynchronous data may need this)
  • A specific User-Agent string
  • A proxy password if, for example, you're behind a corporate network security configuration

If anything on this list is required, you can supply it as parameters to jsoup.connect(). Please refer to the official documentation:

http://jsoup.org/cookbook/input/load-document-from-url
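The checklist above maps onto jsoup's Connection builder. A minimal sketch (the URL, cookie name, and header values are placeholders, not real credentials):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.io.IOException;

public class CrawlWithContext {
    public static void main(String[] args) throws IOException {
        // All concrete values here are placeholders for illustration.
        Document doc = Jsoup.connect("https://example.com/page")
                .userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)") // specific User-Agent
                .header("Accept-Language", "en-US")                     // session-related header
                .cookie("JSESSIONID", "your-session-id")                // cookie from a prior login
                .timeout(30_000)                                        // 30 s for slow-loading sites
                .get();

        System.out.println(doc.title());
    }
}
```

For login-protected sites, one common pattern is to POST the credentials first with `Jsoup.connect(loginUrl).data(...).method(Connection.Method.POST).execute()`, then pass the returned cookies into subsequent requests. Note, however, that none of this makes jsoup execute JavaScript: content rendered client-side after the initial GET still requires a different tool (for example, a headless browser).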



Source: https://stackoverflow.com/questions/11294765/advice-with-crawling-web-site-content
