How to find out the correct encoding when using beautifulsoup?

Submitted by 懵懂的女人 on 2021-01-28 20:15:48

Question


In Python 3 with beautifulsoup4 I want to get information from a website after making the request. I did this:

import requests
from bs4 import BeautifulSoup

req = requests.get('https://sisgvarmazenamento.blob.core.windows.net/prd/PublicacaoPortal/Arquivos/201901.htm').text

soup = BeautifulSoup(req,'lxml')

soup.find("h1").text
'\r\n                        CÃ\x82MARA MUNICIPAL DE SÃ\x83O PAULO'

I do not know what the encoding is, but since it is a site in Brazilian Portuguese it should be UTF-8 or Latin-1.

Is there a way to find out which encoding is correct?

And then, how do I get BeautifulSoup to read the content with that encoding?


Answer 1:


Requests determines encoding like this:

When you receive a response, Requests makes a guess at the encoding to use for decoding the response when you access the Response.text attribute. Requests will first check for an encoding in the HTTP header, and if none is present, will use chardet to attempt to guess the encoding.

The only time Requests will not do this is if no explicit charset is present in the HTTP headers and the Content-Type header contains text. In this situation, RFC 2616 specifies that the default charset must be ISO-8859-1. Requests follows the specification in this case. If you require a different encoding, you can manually set the Response.encoding property, or use the raw Response.content.

Inspecting the response headers (with req here being the Response object returned by requests.get(...), not its .text as in the question) shows that indeed "no explicit charset is present in the HTTP headers and the Content-Type header contains text":

>>> req.headers['content-type']
'text/html'

So requests faithfully follows the standard and decodes as ISO-8859-1 (latin-1).
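A quick check of the encoding requests actually settled on (a minimal sketch, again assuming req is the Response object):

>>> req.encoding
'ISO-8859-1'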

In the response content, a charset is specified:

<META http-equiv="Content-Type" content="text/html; charset=utf-16">

however this is wrong: decoding as UTF-16 produces mojibake.

chardet correctly identifies the encoding as UTF-8.
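The same guess is exposed by requests as the apparent_encoding property (backed by chardet, or charset_normalizer in newer versions), so a minimal check looks like this; the exact string may vary slightly by detector version:

>>> req.apparent_encoding
'utf-8'
>>> import chardet
>>> chardet.detect(req.content)['encoding']
'utf-8'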

So to summarise:

  • there is no general way to determine text encoding with complete accuracy
  • in this particular case, the correct encoding is UTF-8.

Working code:

>>> req.encoding = 'UTF-8'
>>> soup = bs4.BeautifulSoup(req.text,'lxml')
>>> soup.find('h1').text
'\r\n                        CÂMARA MUNICIPAL DE SÃO PAULO'
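A variant sketch that avoids hard-coding the charset is to feed the detected encoding back in, assuming the same req and that the guess is correct for this page:

>>> req.encoding = req.apparent_encoding   # the chardet-based guess, 'utf-8' here
>>> bs4.BeautifulSoup(req.text, 'lxml').find('h1').text
'\r\n                        CÂMARA MUNICIPAL DE SÃO PAULO'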



Answer 2:


When you use requests, you can use the Response object's encoding attribute, for example:

import requests

req = requests.get('https://sisgvarmazenamento.blob.core.windows.net/prd/PublicacaoPortal/Arquivos/201901.htm')

encoding = req.encoding   # the encoding requests inferred from the HTTP headers
text = req.content        # the raw response bytes

decoded_text = text.decode(encoding)
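Note that for this particular page req.encoding falls back to ISO-8859-1 (see Answer 1), so decoding with it would reintroduce the mojibake; a hedged variant of the same idea that uses the chardet-based guess instead:

encoding = req.apparent_encoding            # chardet-based guess from the body bytes
decoded_text = req.content.decode(encoding)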


Source: https://stackoverflow.com/questions/56385353/how-to-find-out-the-correct-encoding-when-using-beautifulsoup
