Question
I'm trying to get Google results using the following code:
Document doc = con.connect("http://www.google.com/search?q=lakshman").timeout(5000).get();
But I get this exception:
org.jsoup.HttpStatusException: HTTP error fetching URL. Status=403,URL=http://www.google.com/search?q=lakshman
A 403 error means the server is forbidding access, but I can load this URL in a web browser just fine. Why does Jsoup get a 403 error?
Answer 1:
You just need to add a User-Agent header to the request, as follows:
Document doc = Jsoup.connect(itemUrl)
        .userAgent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36")
        .get();
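For reference, a complete, self-contained version of that fix applied to the URL from the question might look like the sketch below (the class name is made up and the user-agent string is only an example; any realistic browser string should work):

import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class GoogleSearchFetch {
    public static void main(String[] args) throws IOException {
        // Without an explicit user agent the request does not look like a browser,
        // which is what triggers the 403 on Google's side.
        Document doc = Jsoup.connect("http://www.google.com/search?q=lakshman")
                .userAgent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) "
                        + "AppleWebKit/537.36 (KHTML, like Gecko) "
                        + "Chrome/33.0.1750.152 Safari/537.36")
                .timeout(5000)
                .get();
        System.out.println(doc.title());
    }
}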
Answer 2:
Google doesn't allow robots, so you can't use Jsoup to connect to Google directly. You can use the Google Web Search API (deprecated) instead, but the number of requests you may make per day is limited.
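Purely for illustration, the now-retired AJAX Web Search endpoint that answer refers to was queried with a plain HTTP GET returning JSON; a rough sketch of such a request with Jsoup is below. The endpoint has since been shut down, so this only shows the shape of the call, not something that still works:

import java.io.IOException;

import org.jsoup.Jsoup;

public class DeprecatedSearchApiSketch {
    public static void main(String[] args) throws IOException {
        // The old AJAX Web Search API returned JSON, so ignoreContentType(true)
        // is needed for Jsoup to accept a non-HTML response body.
        String json = Jsoup
                .connect("http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=lakshman")
                .ignoreContentType(true)
                .execute()
                .body();
        System.out.println(json);
    }
}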
Answer 3:
Actually, you can avoid the 403 error just by adding a user agent:
doc = Jsoup.connect(url).timeout(timeout)
        .userAgent("Mozilla")
        .get();
But I think that is against Google's policy.
EDIT: Google catches robots quicker than you think. You can, however, use this as a temporary solution.
Answer 4:
try this:
Document doc = con.connect("http://www.google.com/search?q=lakshman").ignoreHttpErrors(true).timeout(5000).get();
in case setting the user agent did not work, just like it didn't for me.
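As a sketch of how that can be used more defensively: ignoreHttpErrors(true) stops Jsoup from throwing on a 403, but it does not make the blocked page useful, so it can help to fetch with execute() and inspect the status code before parsing (the URL, timeout, and class name below are just carried over or made up for the example):

import java.io.IOException;

import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class IgnoreHttpErrorsSketch {
    public static void main(String[] args) throws IOException {
        // execute() returns the raw response instead of throwing HttpStatusException
        // for non-2xx status codes when ignoreHttpErrors(true) is set.
        Connection.Response res = Jsoup
                .connect("http://www.google.com/search?q=lakshman")
                .ignoreHttpErrors(true)
                .timeout(5000)
                .execute();

        System.out.println("HTTP status: " + res.statusCode());
        if (res.statusCode() == 200) {
            Document doc = res.parse(); // only parse when the fetch actually succeeded
            System.out.println(doc.title());
        }
    }
}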
Answer 5:
Replace the statement
Document doc = con.connect("http://www.google.com/search?q=lakshman").timeout(5000).get();
with the statement
Document doc = Jsoup.connect("http://www.google.com/search?q=lakshman").userAgent("Chrome").get();
Answer 6:
In some cases you also need to set a referrer. It helped in my case.
The full source is here:
try {
    String strText = Jsoup
            .connect("http://www.whatismyreferer.com")
            .referrer("http://www.google.com")
            .get()
            .text();
    System.out.println(strText);
} catch (IOException ioe) {
    System.out.println("Exception: " + ioe);
}
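Putting the suggestions from the answers together, a combined request that sets a user agent, a referrer, and a timeout could look roughly like this variation of the snippet above (the header values are only examples):

try {
    Document doc = Jsoup.connect("http://www.google.com/search?q=lakshman")
            .userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)") // example browser string
            .referrer("http://www.google.com")                      // some sites also check the referrer
            .timeout(5000)
            .get();
    System.out.println(doc.title());
} catch (IOException ioe) {
    System.out.println("Exception: " + ioe);
}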
Source: https://stackoverflow.com/questions/14467459/403-error-while-getting-the-google-result-using-jsoup