Screen Scraping in Python

Submitted by 孤人 on 2020-01-04 02:31:07

Question


I'm new to the whole concept of screen scraping in Python, although I've done a bit of screen scraping in R. I'm trying to scrape the Yelp website, specifically the name of each insurance agency that the Yelp search returns. With most scraping tasks I can get as far as the following, but I always have a hard time going further and parsing the XML.

import urllib2
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

# Fetch the search results page and parse it into a soup object
soup = BeautifulSoup(urllib2.urlopen('http://www.yelp.com/search?find_desc=insurance+agency&ns=1&find_loc=Austin').read())

print soup
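(For reference, the sort of next step I get stuck on would look something like the sketch below; the 'biz-name' class is only a guess at Yelp's markup, not something I've verified.)

# Hypothetical selector: inspect the page to find the element that
# actually wraps each agency name; 'biz-name' is an assumption.
for tag in soup.findAll('a', {'class': 'biz-name'}):
    print tag.string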

So when scraping a site, what are the steps that one should follow? Is there a set of necessary actions that one needs to take each time they attempt to scrape a site?

I'm running Python 2.6 on Ubuntu 10.10

I realize that this may be a poor SO question as outlined in the FAQ, but I'm hoping someone can provide some general guidelines and things to consider when scraping a site.


Answer 1:


I'd recommend reading up on XPath and trying the Scrapy tutorial at http://doc.scrapy.org/intro/tutorial.html. It is fairly easy to write a spider like this:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class DmozSpider(BaseSpider):
    name = "dmoz.org"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        # Called with the downloaded response for each start URL
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        for site in sites:
            title = site.select('a/text()').extract()
            link = site.select('a/@href').extract()
            desc = site.select('text()').extract()
            print title, link, desc
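Assuming the spider is saved inside a Scrapy project's spiders/ package (for example one created with scrapy startproject), it can be run from the project directory with:

scrapy crawl dmoz.org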



Answer 2:


To ease the common tasks associated with screen scraping, there is a Python framework called Scrapy. It makes HTML and XML parsing painless.
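A quick way to see this without writing a full spider is the Scrapy shell, which (in Scrapy versions of that era) fetches a page and binds an HtmlXPathSelector as hxs so you can try XPath expressions interactively. A rough sketch, where the XPath is only a placeholder and not matched against Yelp's real markup:

scrapy shell 'http://www.yelp.com/search?find_desc=insurance+agency&ns=1&find_loc=Austin'
>>> hxs.select('//a/text()').extract()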




Answer 3:


What you might be running into is trouble parsing content that is dynamically generated with JavaScript. I wrote a small tutorial on this subject that might help:

http://koaning.github.io/html/scapingdynamicwebsites.html

Basically, you have the Selenium library drive a Firefox browser; the browser waits until all the JavaScript has loaded before passing you the HTML string. Once you have that string, you can parse it with BeautifulSoup.
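A minimal sketch of that flow (assuming the selenium package and a local Firefox install; this is not the tutorial's exact code):

from selenium import webdriver
from BeautifulSoup import BeautifulSoup

# Drive a real Firefox instance so the page's JavaScript actually runs
driver = webdriver.Firefox()
driver.get('http://www.yelp.com/search?find_desc=insurance+agency&ns=1&find_loc=Austin')
html = driver.page_source   # HTML after the scripts have executed
driver.quit()

# Hand the rendered HTML to BeautifulSoup as before
soup = BeautifulSoup(html)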



Source: https://stackoverflow.com/questions/6529633/screen-scraping-in-python
