HTML scraping using lxml and requests gives a unicode error [duplicate]

Submitted by 半世苍凉 on 2019-11-27 14:32:10

Question


I'm trying to use an HTML scraper like the one provided here. It works fine for the example they provide, but when I try it with my own webpage I receive this error: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. I've tried googling but couldn't find a solution, and I'd truly appreciate any help. Is there a way to pull the page in as HTML using Python?

Edit:

from lxml import html
import requests

page = requests.get('http://cancer.sanger.ac.uk/cosmic/gene/analysis?ln=PTEN&ln1=PTEN&start=130&end=140&coords=bp%3AAA&sn=&ss=&hn=&sh=&id=15#')
# This line raises the "Unicode strings with encoding declaration are not supported" error
tree = html.fromstring(page.text)

Thank you.


Answer 1:


Short answer: use page.content, not page.text.

From http://lxml.de/parsing.html#python-unicode-strings :

the parsers in lxml.etree can handle unicode strings straight away ... This requires, however, that unicode strings do not specify a conflicting encoding themselves and thus lie about their real encoding

From http://docs.python-requests.org/en/latest/user/quickstart/#response-content :

Requests will automatically decode content from the server [as r.text]. ... You can also access the response body as bytes [as r.content].

So you see, both requests' .text and lxml.etree want to decode the UTF-8 bytes into unicode. But if we let requests do the decoding, the encoding declaration inside the document becomes a lie: lxml is handed an already-decoded unicode string that still claims an encoding.

So instead, use requests' .content, which does no decoding. That way lxml receives the raw, undecoded bytes and can apply the declared encoding itself.
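
A minimal sketch of the corrected version, using the same URL as in the question (it assumes the page is still reachable):

from lxml import html
import requests

page = requests.get('http://cancer.sanger.ac.uk/cosmic/gene/analysis?ln=PTEN&ln1=PTEN&start=130&end=140&coords=bp%3AAA&sn=&ss=&hn=&sh=&id=15#')
# page.content is the raw response body (bytes), so lxml can read the
# encoding declaration itself instead of receiving an already-decoded str.
tree = html.fromstring(page.content)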



Source: https://stackoverflow.com/questions/25023237/html-scraping-using-lxml-and-requests-gives-a-unicode-error
