I wanted to make a Java-based web crawler for an experiment. I heard that Java is a good choice if this is your first time writing a web crawler. However, I have two impo
For parsing content, I'm using Apache Tika.
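If Tika is on the classpath, the facade class keeps the extraction side short. A minimal sketch, assuming tika-core and tika-parsers are available:

import java.io.InputStream;
import java.net.URL;
import org.apache.tika.Tika;

public class TikaExtract {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        try (InputStream in = new URL("http://stackoverflow.com/").openStream()) {
            // Auto-detects the content type and extracts plain text from HTML, PDF, etc.
            System.out.println(tika.parseToString(in));
        }
    }
}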
I recommend you use the HttpClient library. You can find examples here.
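Fetching a page with HttpClient is only a few lines. A rough sketch against the 4.x API (method names differ slightly in older releases):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HttpClientFetch {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(new HttpGet("http://stackoverflow.com/"))) {
            // Read the response body into a String for later parsing
            String html = EntityUtils.toString(response.getEntity());
            System.out.println(html);
        }
    }
}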
Though mainly used for unit testing web applications, HttpUnit traverses a website, clicks links, analyzes tables and form elements, and gives you metadata about all the pages. I use it for web crawling, not just for unit testing. - http://httpunit.sourceforge.net/
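To give a flavor of the crawling side, this is roughly how you fetch a page and enumerate its links with HttpUnit (a sketch; check the API docs for the exact signatures):

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebLink;
import com.meterware.httpunit.WebResponse;

public class HttpUnitLinks {
    public static void main(String[] args) throws Exception {
        WebConversation wc = new WebConversation();
        WebResponse response = wc.getResponse("http://stackoverflow.com/");
        // Every anchor on the page, ready to be followed or queued
        for (WebLink link : response.getLinks()) {
            System.out.println(link.getURLString());
        }
    }
}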
This is how your program 'visits' or 'connects' to web pages.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;

BufferedReader reader = null;
String line;
try {
    URL url = new URL("http://stackoverflow.com/");
    // openStream() throws an IOException if the connection fails
    reader = new BufferedReader(new InputStreamReader(url.openStream()));
    while ((line = reader.readLine()) != null) {
        System.out.println(line);
    }
} catch (MalformedURLException mue) {
    mue.printStackTrace();
} catch (IOException ioe) {
    ioe.printStackTrace();
} finally {
    if (reader != null) {
        try {
            reader.close();
        } catch (IOException ioe) {
            // nothing to see here
        }
    }
}
This will download the HTML source of the page.
For HTML parsing, see this.
Also take a look at jSpider and jsoup.
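For instance, pulling the title and links out of a page with jsoup is straightforward (a sketch; jsoup can also parse HTML you have already downloaded):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupLinks {
    public static void main(String[] args) throws Exception {
        // jsoup fetches and parses the page in one step
        Document doc = Jsoup.connect("http://stackoverflow.com/").get();
        System.out.println(doc.title());
        // Select every anchor that has an href attribute
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.attr("abs:href"));
        }
    }
}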
I would prefer crawler4j. Crawler4j is an open-source Java crawler that provides a simple interface for crawling the web. You can set up a multi-threaded web crawler in a few hours.
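A minimal setup looks roughly like this. It is only a sketch based on the 4.x API; the shouldVisit signature has changed between versions and the storage folder is a hypothetical path, so check the project's own examples:

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {
    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        // Stay on the seed domain
        return url.getURL().startsWith("http://stackoverflow.com/");
    }

    @Override
    public void visit(Page page) {
        System.out.println("Visited: " + page.getWebURL().getURL());
    }

    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawl"); // hypothetical folder for intermediate data
        PageFetcher fetcher = new PageFetcher(config);
        RobotstxtServer robots = new RobotstxtServer(new RobotstxtConfig(), fetcher);
        CrawlController controller = new CrawlController(config, fetcher, robots);
        controller.addSeed("http://stackoverflow.com/");
        controller.start(MyCrawler.class, 4); // 4 crawler threads
    }
}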
There are now many Java-based HTML parsers that support fetching and parsing HTML pages.
Here's a complete list of HTML parsers with a basic comparison.