Question
Is there an easy way to scrape Google and save just the text of the top N (say, 1000) .html (or whatever) documents for a given search?
As an example, imagine searching for the phrase "big bad wolf" and downloading just the text from the top 1000 hits -- i.e., actually downloading the text from those 1000 web pages (but just those pages, not the entire site).
I'm assuming this would use the urllib2 library? I use Python 3.1 if that helps.
Answer 1:
The official way to get results from Google programmatically is to use Google's Custom Search API. As icktoofay comments, other approaches (such as directly scraping the results or using the xgoogle module) break Google's terms of service. Because of that, you might want to consider using the API from another search engine, such as the Bing API or Yahoo!'s service.
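For reference, here is a minimal sketch of calling the Custom Search JSON API from Python with urllib.request (the urllib2 module from Python 2 became urllib.request in Python 3). The API_KEY and CX values are placeholders you would create in the Google developer console:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "your-api-key"  # placeholder: create one in the Google developer console
CX = "your-cse-id"        # placeholder: the ID of your custom search engine

def search(query, start=1):
    """Fetch one page of results (up to 10) from the Custom Search JSON API."""
    params = urllib.parse.urlencode({
        "key": API_KEY,
        "cx": CX,
        "q": query,
        "start": start,  # 1-based index of the first result to return
    })
    url = "https://www.googleapis.com/customsearch/v1?" + params
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8"))
    return [item["link"] for item in data.get("items", [])]

print(search("big bad wolf"))
```

Note that the API returns at most 10 results per request and caps a single query at 100 results in total, so it cannot by itself reach the 1000 hits the question asks for.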
Answer 2:
Check out BeautifulSoup for scraping the content out of web pages. It is supposed to be very tolerant of broken markup, which helps here because not every result page is well formed. So you should be able to (see the sketch after this list):
- Request http://www.google.ca/search?q=QUERY_HERE
- Extract and follow the result links using BeautifulSoup (result links appear to use class="r")
- Extract text from result pages using BeautifulSoup
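A rough sketch of those three steps, assuming the markup described above (result links inside elements with class="r"). Google's HTML changes frequently, so the selector is illustrative only, and as Answer 1 notes, scraping the results page violates Google's terms of service:

```python
import urllib.parse
import urllib.request

from bs4 import BeautifulSoup  # pip install beautifulsoup4

HEADERS = {"User-Agent": "Mozilla/5.0"}  # Google tends to block urllib's default agent

def fetch(url):
    """Download the raw HTML of a page."""
    request = urllib.request.Request(url, headers=HEADERS)
    return urllib.request.urlopen(request).read()

# 1. Request the results page.
query = urllib.parse.urlencode({"q": "big bad wolf"})
results_html = fetch("http://www.google.ca/search?" + query)

# 2. Extract result links (class="r" per this answer; the markup may have changed,
#    and hrefs can be relative /url?q=... redirects that need unwrapping).
soup = BeautifulSoup(results_html, "html.parser")
links = [a["href"] for a in soup.select("h3.r a") if a.has_attr("href")]

# 3. Follow each link and extract just the text.
for link in links:
    page = BeautifulSoup(fetch(link), "html.parser")
    print(page.get_text(separator=" ", strip=True)[:200])  # first 200 chars as a check
```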
Answer 3:
As mentioned, scraping Google violates their ToS; that said, that warning is probably not the answer you're looking for.
There's a PHP script available that does a good job of scraping Google: http://google-scraper.squabbel.com/ Give it a keyword and the number of results you want, and it will return all the results for you. Then parse out the returned URLs, fetch the HTML source with urllib or curl, and you're done.
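A minimal sketch of that last step, assuming you already have the list of result URLs from the script (the urls list below is a placeholder):

```python
import urllib.error
import urllib.request

urls = ["http://example.com/page1", "http://example.com/page2"]  # placeholder list

for i, url in enumerate(urls):
    try:
        html = urllib.request.urlopen(url, timeout=10).read()
    except (urllib.error.URLError, OSError) as exc:
        print("skipping", url, "->", exc)
        continue
    # Save the raw source; text extraction (e.g., with BeautifulSoup) can come later.
    with open("hit_%04d.html" % i, "wb") as f:
        f.write(html)
```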
You also really shouldn't attempt to scrape Google unless you have more than 100 proxy servers, though. Google will temporarily ban your IP after just a few automated requests.
Source: https://stackoverflow.com/questions/5321434/python-easy-way-to-scrape-google-download-top-n-hits-entire-html-documents