Question
I crawled the following page:
http://www.nasa.gov/topics/earth/features/plains-tornadoes-20120417.html
But I got Segmentation fault (core dumped) when calling BeautifulSoup(page_html), where page_html is the content returned by the requests library. Is this a bug in BeautifulSoup? Is there any way to work around it? Even an approach like try...except would help me keep my code running. Thanks in advance.
The code is as following:
import requests
from bs4 import BeautifulSoup

toy_url = 'http://www.nasa.gov/topics/earth/features/plains-tornadoes-20120417.html'
# Fetch the page with a browser-like User-Agent header
res = requests.get(toy_url, headers={"User-Agent": "Firefox/12.0"})
page = res.content
soup = BeautifulSoup(page)  # segfaults here
Answer 1:
This problem is caused by a bug in lxml, which is fixed in lxml 2.3.5. You can upgrade lxml, or use Beautiful Soup with the html5lib parser or Python's built-in html.parser instead.
Answer 2:
Definitely a bug; it shouldn't be possible to segfault this way. I can reproduce it (Beautiful Soup 4.0.1):
>>> import bs4, urllib2
>>> url = "http://www.nasa.gov/topics/earth/features/plains-tornadoes-20120417.html"
>>> page = urllib2.urlopen(url).read()
>>> soup = bs4.BeautifulSoup(page)
Segmentation fault
After some bisecting, it looks to be caused by the DOCTYPE:
>>> page[:page.find(">")+1]
'<!DOCTYPE "xmlns:xsl=\'http://www.w3.org/1999/XSL/Transform\'">'
And a crude hack allows bs4 to parse it:
>>> soup = bs4.BeautifulSoup(page[page.find(">")+1:])
>>> soup.find_all("a")[:3]
[<a href="/home/How_to_enable_Javascript.html" target="_blank">› Learn How</a>, <a href="#maincontent">Follow this link to skip to the main content</a>, <a class="nasa_logo" href="/home/index.html"><span class="hide">NASA - National Aeronautics and Space Administration</span></a>]
Someone who knows more might be able to say what's really going on, but this should at least get you started.
Source: https://stackoverflow.com/questions/13323469/beautifulsoup-4-segmentation-fault-core-dumped