I'm trying to search a webpage using regular expressions, but I'm getting the following error:
TypeError: can't use a string pattern on a bytes-like object
I understand why: urllib.request.urlopen() returns a byte stream, so, at least I'm guessing, re doesn't know the encoding to use. What am I supposed to do in this situation? Is there a way to specify the encoding in the request, or will I need to decode the bytes myself? If so, what am I looking to do? I assume I should read the encoding from the header info, or from the encoding declared in the HTML if present, and then decode the bytes with it.
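For context, here is a minimal reproduction of the error and the two ways around it (the URL is a placeholder and the utf-8 decode is only an assumption about the page's encoding):

import re
import urllib.request

data = urllib.request.urlopen('https://example.com').read()   # bytes
# re.search('<title>', data)                # TypeError: can't use a string pattern on a bytes-like object
re.search(b'<title>', data)                 # works: a bytes pattern can search bytes
re.search('<title>', data.decode('utf-8'))  # works: decode first, then use a str pattern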
You just need to decode the response, using the charset from the Content-Type header (typically its last value). There is an example of this in the urllib tutorial too.
output = response.decode('utf-8')   # here 'response' is assumed to be the bytes already read from the server
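If you don't want to hard-code 'utf-8', here is a minimal sketch of pulling the charset from the Content-Type header instead (the URL is a placeholder and the utf-8 fallback is my own assumption, not part of this answer):

import urllib.request

resp = urllib.request.urlopen('https://example.com')
charset = resp.headers.get_content_charset() or 'utf-8'   # charset parameter of Content-Type, if present
output = resp.read().decode(charset)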
For me, the solution is the following (Python 3):
import urllib.request

resource = urllib.request.urlopen(an_url)
content = resource.read().decode(resource.headers.get_content_charset())
With requests:
import requests
response = requests.get(URL).text
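For what it's worth, requests decodes .text for you using the encoding it reads from the response headers, and you can inspect or override that before touching .text. A small sketch (the URL is a placeholder):

import requests

response = requests.get('https://example.com')
print(response.encoding)        # the charset requests took from the headers, e.g. 'utf-8'
response.encoding = 'utf-8'     # override it if the server reports the wrong charset
text = response.text            # str, decoded with response.encoding
raw = response.content          # the undecoded bytes, if you need them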
I had the same issue for the last two days and finally have a solution. I'm using the info() method of the object returned by urlopen():
import urllib.request

req = urllib.request.urlopen(URL)
charset = req.info().get_content_charset()   # charset from the Content-Type header
content = req.read().decode(charset)
urllib.urlopen(url).headers.getheader('Content-Type')
(that is the Python 2 form; in Python 3 it's urllib.request.urlopen(url).headers.get('Content-Type'))
will give you something like this:
text/html; charset=utf-8
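If you want to pull the charset out of that string yourself, a simple split is enough. A sketch under the assumption that utf-8 is an acceptable fallback when no charset parameter is present (the URL is a placeholder):

import urllib.request

resp = urllib.request.urlopen('https://example.com')
content_type = resp.headers.get('Content-Type', '')   # e.g. 'text/html; charset=utf-8'
charset = 'utf-8'                                      # assumed fallback
for param in content_type.split(';')[1:]:
    key, _, value = param.strip().partition('=')
    if key.lower() == 'charset':
        charset = value.strip('\'" ')
text = resp.read().decode(charset)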
After you make a request with req = urllib.request.urlopen(...), you have to read it by calling html_string = req.read(). Note that this actually gives you bytes rather than a string, so you still need to decode it before you can parse it the way you want.
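A hedged sketch of that decode step, including a fallback to the page's own <meta charset=...> declaration (which the question mentions) when the headers don't specify one; the regex and the utf-8 default are illustrative, not from any of the answers:

import re
import urllib.request

req = urllib.request.urlopen('https://example.com')   # placeholder URL
raw = req.read()                                       # bytes, not str
charset = req.headers.get_content_charset()            # from Content-Type; may be None

if charset is None:                                    # fall back to the HTML's own declaration
    match = re.search(rb'<meta[^>]+charset=["\']?([\w-]+)', raw, re.IGNORECASE)
    charset = match.group(1).decode('ascii') if match else 'utf-8'

html_string = raw.decode(charset, errors='replace')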
Source: https://stackoverflow.com/questions/4981977/how-to-handle-response-encoding-from-urllib-request-urlopen