Crawler4j vs. Jsoup for crawling and parsing pages in Java
Question: I want to fetch the content of a page and extract specific parts of it. As far as I know, there are at least two libraries for such a task: Crawler4j and Jsoup. Both are capable of retrieving the content of a page and extracting sub-parts of it. The only thing I'm not sure about is: what is the difference between them? There is a similar question, which is marked as answered: "Crawler4j is a crawler, Jsoup is a parser." But I just checked, and Jsoup is also capable of crawling a page in addition to its parsing functionality.
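To illustrate the overlap the question refers to, here is a minimal sketch of fetching and parsing a page with Jsoup alone (the URL and CSS selector are placeholders, and this assumes the `org.jsoup:jsoup` dependency is on the classpath):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupFetchExample {
    public static void main(String[] args) throws Exception {
        // Jsoup can fetch a single page itself (a one-off HTTP GET),
        // which is the "crawling" capability mentioned in the question.
        Document doc = Jsoup.connect("https://example.com/").get();

        // Extract specific parts via CSS selectors.
        String title = doc.title();
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.attr("abs:href"));
        }
        System.out.println("Title: " + title);
    }
}
```

Note the distinction this example hints at: Jsoup fetches one URL at a time on request, whereas a crawler such as Crawler4j manages a whole traversal (URL frontier, politeness delays, revisit policies, multi-threaded fetching), typically handing each fetched page to a parser like Jsoup for extraction.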