Crawler4j vs. Jsoup for crawling and parsing pages in Java


Question


I want to get the content of a page and extract specific parts of it. As far as I know, there are at least two solutions for such a task: Crawler4j and Jsoup.

Both of them are capable of retrieving the content of a page and extracting sub-parts of it. The only thing I'm not sure about is the difference between them. There is a similar question, which is marked as answered:

Crawler4j is a crawler, Jsoup is a parser.

But I just checked: Jsoup is also capable of crawling a page in addition to its parsing functionality, while Crawler4j is capable not only of crawling a page but also of parsing its content.

So can you please clarify the difference between Crawler4j and Jsoup?


Answer 1:


Crawling is something bigger than just retrieving the contents of a single URI. If you just want to retrieve the content of some pages, then there is no real benefit in using something like Crawler4J.

Let's take a look at an example. Assume you want to crawl a website. The requirements would be:

  1. Give a base URI (the home page).
  2. Take all the URIs from each page and retrieve their contents too.
  3. Move recursively for every URI you retrieve.
  4. Retrieve the contents only of URIs that are inside this website (there could be external URIs referencing another website; we don't need those).
  5. Avoid circular crawling. Page A has a URI for page B (of the same site), and page B has a URI for page A, but we already retrieved the content of page A (the About page has a link to the Home page, but we already got the contents of the Home page, so don't visit it again).
  6. The crawling operation must be multithreaded.
  7. The website is vast. It contains a lot of pages. We only want to retrieve 50 URIs, beginning from the home page.

This is a simple scenario. Try solving it with Jsoup: all of this functionality would have to be implemented by you. Crawler4J, or any crawler micro-framework for that matter, would or should have an implementation for the actions above. Jsoup's strong qualities shine when you get to decide what to do with the content.
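For concreteness, here is a minimal sketch of how those requirements map onto Crawler4j (assuming the 4.x API; the domain, storage folder, and page limit are placeholders). The controller handles the seed URI, the recursion, the visited-URI bookkeeping, the thread pool, and the page cap, while shouldVisit keeps the crawl inside the site:

```java
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class SiteCrawler extends WebCrawler {

    // Requirement 4: only follow URIs that stay inside this website (placeholder domain)
    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        return url.getURL().startsWith("https://www.example.com/");
    }

    // Called for every page whose content was retrieved
    @Override
    public void visit(Page page) {
        System.out.println("Fetched: " + page.getWebURL().getURL());
    }

    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawl"); // intermediate crawl data
        config.setMaxPagesToFetch(50);              // requirement 7: stop after 50 pages

        PageFetcher fetcher = new PageFetcher(config);
        RobotstxtServer robots = new RobotstxtServer(new RobotstxtConfig(), fetcher);
        CrawlController controller = new CrawlController(config, fetcher, robots);

        controller.addSeed("https://www.example.com/"); // requirement 1: the base URI
        controller.start(SiteCrawler.class, 4);         // requirements 2, 3, 5, 6: recursive,
                                                        // de-duplicated, multithreaded crawl
    }
}
```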

Let's take a look at some requirements for parsing.

  1. Get all paragraphs of a page
  2. Get all images
  3. Remove invalid tags (tags that do not comply with the HTML specs)
  4. Remove script tags

This is where Jsoup comes into play. Of course, there is some overlap here. Some things might be possible with both Crawler4J and Jsoup, but that doesn't make them equivalent. You could remove the content-retrieval mechanism from Jsoup and it would still be an amazing tool to use. If Crawler4J lost its retrieval mechanism, it would lose half of its functionality.
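A small sketch of those parsing requirements with Jsoup (the URL is a placeholder; in jsoup versions before 1.14 the Safelist class was called Whitelist):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.safety.Cleaner;
import org.jsoup.safety.Safelist;
import org.jsoup.select.Elements;

public class ParseExample {
    public static void main(String[] args) throws Exception {
        // Jsoup can also fetch the page itself via its simple HTTP API
        Document doc = Jsoup.connect("https://www.example.com/").get();

        // 1. Get all paragraphs of the page
        Elements paragraphs = doc.select("p");
        paragraphs.forEach(p -> System.out.println(p.text()));

        // 2. Get all images (absolute URLs resolved against the page's base URI)
        for (Element img : doc.select("img[src]")) {
            System.out.println(img.absUrl("src"));
        }

        // 4. Remove script tags explicitly
        doc.select("script").remove();

        // 3. Keep only tags allowed by a safelist, dropping non-compliant markup
        Document clean = new Cleaner(Safelist.relaxed()).clean(doc);
        System.out.println(clean.body().html());
    }
}
```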

I used both of them in the same project in a real-life scenario. I crawled a site, leveraging the strong points of Crawler4J, to handle all the problems mentioned in the first example. Then I passed the content of each page I retrieved to Jsoup in order to extract the information I needed. Could I have used just one or the other? Yes, I could, but I would have had to implement all the missing functionality myself.
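A sketch of that combination: Crawler4j's visit() callback hands each retrieved page's HTML to Jsoup, and a CSS query does the extraction (the "article h2 > a" selector is purely illustrative; the real selector depends on the target site):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;

public class ExtractingCrawler extends WebCrawler {

    @Override
    public void visit(Page page) {
        if (page.getParseData() instanceof HtmlParseData) {
            String html = ((HtmlParseData) page.getParseData()).getHtml();

            // Let Jsoup parse what Crawler4j retrieved, keeping the page URL as base URI
            Document doc = Jsoup.parse(html, page.getWebURL().getURL());

            // Illustrative CSS query; complex selection is where Jsoup does the work
            Elements headlines = doc.select("article h2 > a");
            headlines.forEach(a ->
                System.out.println(a.text() + " -> " + a.absUrl("href")));
        }
    }
}
```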

Hence the difference: Crawler4J is a crawler with some simple parsing operations (you could extract the images in one line), but it has no implementation for complex CSS queries. Jsoup is a parser that also gives you a simple API for HTTP requests; for anything more complex, there is no implementation.



Source: https://stackoverflow.com/questions/34888510/crawler4j-vs-jsoup-for-the-pages-crawling-and-parsing-in-java
