Question:
I want to use an HTML parser that does the following in a nice, elegant way:
- Extract text (this is most important)
- Extract links, meta keywords
- Reconstruct original doc (optional but nice feature to have)
From my investigation so far, Jericho seems to fit. Any other open source libraries you guys would recommend?
Answer 1:
I recently experimented with HtmlCleaner and CyberNekoHtml. CyberNekoHtml is a DOM/SAX parser that produces predictable results. HtmlCleaner is a tad faster, but quite often fails to produce accurate results.
I would recommend CyberNekoHtml. It can do all of the things you mentioned. It is very easy to extract a list of all elements and their attributes, for example. It would also be possible to traverse the DOM tree, rebuilding each element as HTML, if you wanted to reconstruct the page.
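The answer describes the approach but doesn't include code; a minimal sketch of what this might look like with NekoHTML's DOMParser (the URL is a placeholder, and elements are looked up in upper case since that is NekoHTML's default for element names):

```java
import org.cyberneko.html.parsers.DOMParser;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class NekoExample {
    public static void main(String[] args) throws Exception {
        // Parse an HTML page into a W3C DOM tree
        DOMParser parser = new DOMParser();
        parser.parse("https://example.com/");   // placeholder URL
        Document doc = parser.getDocument();

        // Extract the text content of the whole document
        System.out.println(doc.getDocumentElement().getTextContent());

        // Extract link targets
        NodeList anchors = doc.getElementsByTagName("A");
        for (int i = 0; i < anchors.getLength(); i++) {
            System.out.println(((Element) anchors.item(i)).getAttribute("href"));
        }

        // Extract meta keywords
        NodeList metas = doc.getElementsByTagName("META");
        for (int i = 0; i < metas.getLength(); i++) {
            Element meta = (Element) metas.item(i);
            if ("keywords".equalsIgnoreCase(meta.getAttribute("name"))) {
                System.out.println(meta.getAttribute("content"));
            }
        }
    }
}
```

Reconstructing the page would then be a matter of walking the same DOM tree and serializing it back to HTML.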
There's a list of open source Java HTML parsers here: http://java-source.net/open-source/html-parsers
Answer 2:
I would definitely go for JSoup.
It's a very elegant library and does exactly what you need.
See Example Here
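The linked example isn't reproduced here, but a short sketch covering all three requirements with jsoup's documented API (the URL is a placeholder) might look like:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupExample {
    public static void main(String[] args) throws Exception {
        // Fetch and parse a page (Jsoup.parse(html) works for strings you already have)
        Document doc = Jsoup.connect("https://example.com/").get();   // placeholder URL

        // 1. Extract text
        System.out.println(doc.text());

        // 2. Extract links and meta keywords
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.attr("abs:href"));
        }
        System.out.println(doc.select("meta[name=keywords]").attr("content"));

        // 3. Reconstruct the (normalised) document
        System.out.println(doc.outerHtml());
    }
}
```

Note that outerHtml() returns jsoup's cleaned-up version of the markup rather than a byte-for-byte copy of the original.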
Answer 3:
I ended up using HtmlCleaner (http://htmlcleaner.sourceforge.net/) for something similar. It's really easy to use and was quick for what I needed.
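For comparison, a rough sketch of the same tasks with HtmlCleaner (assuming a 2.x version that ships SimpleHtmlSerializer; the URL is a placeholder):

```java
import java.net.URL;

import org.htmlcleaner.HtmlCleaner;
import org.htmlcleaner.SimpleHtmlSerializer;
import org.htmlcleaner.TagNode;

public class HtmlCleanerExample {
    public static void main(String[] args) throws Exception {
        HtmlCleaner cleaner = new HtmlCleaner();

        // Clean and parse the page into a TagNode tree
        TagNode root = cleaner.clean(new URL("https://example.com/"));   // placeholder URL

        // Extract text
        System.out.println(root.getText());

        // Extract links and meta keywords
        for (TagNode a : root.getElementsByName("a", true)) {
            System.out.println(a.getAttributeByName("href"));
        }
        for (TagNode meta : root.getElementsByName("meta", true)) {
            if ("keywords".equalsIgnoreCase(meta.getAttributeByName("name"))) {
                System.out.println(meta.getAttributeByName("content"));
            }
        }

        // Re-serialise the cleaned document
        System.out.println(new SimpleHtmlSerializer(cleaner.getProperties()).getAsString(root));
    }
}
```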
Source: https://stackoverflow.com/questions/2609948/text-extraction-with-java-html-parsers