Question
I am trying to web scrape this page and the code I use is this:
page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page")
I get this error when I run this code:
Traceback (most recent call last):
  File "/Users/lakesh/WebScraping/Gold.py", line 46, in <module>
    page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page")
  File "/Library/Python/2.7/site-packages/requests/api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "/Library/Python/2.7/site-packages/requests/sessions.py", line 622, in send
    r = adapter.send(request, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/adapters.py", line 511, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='www.uobgroup.com', port=443): Max retries exceeded with url: /online-rates/gold-and-silver-prices.page (Caused by SSLError(SSLError(1, u'[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:590)'),))
Tried this as well:
page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page",verify=False)
This doesn't work either. I need some guidance.
Full code:
from requests import get
import requests
from requests.exceptions import RequestException
from contextlib import closing
from bs4 import BeautifulSoup
from collections import defaultdict
import json
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'DES-CBC3-SHA'
page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page")
html = BeautifulSoup(page.content, 'html.parser')
result = defaultdict(list)
last_table = html.find_all('table')[-1]
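For reference, the DEFAULT_CIPHERS line above patches requests globally; the same cipher can also be pinned on a single session with a custom adapter. This is only a rough sketch of that idea (CipherAdapter is just a name I made up, it assumes ssl.create_default_context is available and that your urllib3 accepts an ssl_context argument, and whether DES-CBC3-SHA is still allowed depends on the local OpenSSL build):
import ssl
import requests
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager

class CipherAdapter(HTTPAdapter):
    # Pin the cipher on this session only, instead of patching
    # requests.packages.urllib3 globally.
    def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers('DES-CBC3-SHA')
        kwargs['ssl_context'] = ctx
        self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                       block=block, **kwargs)

session = requests.Session()
session.mount('https://', CipherAdapter())
page = session.get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page")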
Answer 1:
I added the verify=False option and took out the line that sets the cipher. Once I did this, your code worked for me in Python 3... sometimes. It works once, and then seems to not work for a while. My guess is that the site is rate-limiting access, possibly based on the agent signature it sees, to limit bot access. I printed last_table when it worked, and here's what I got:
<table class="responsive-table-rates table table-striped table-bordered" id="nova-funds-list-table">
<tbody>
<tr>
<td style="background-color: #002265; text-align: center; color: #ffffff;">DESCRIPTION</td>
<td style="background-color: #002265; text-align: center; color: #ffffff;">CURRENCY</td>
<td style="background-color: #002265; text-align: center; color: #ffffff;">UNIT</td>
<td style="background-color: #002265; text-align: center; color: #ffffff;">BANK SELLS</td>
<td style="background-color: #002265; text-align: center; color: #ffffff;">BANK BUYS</td>
<td style="text-align: left; display: none;"> </td>
<td style="text-align: left; display: none;"> </td>
</tr>
</tbody>
</table>
I am dumping the incoming content to a file. When it works, I get readable HTML. When it doesn't, I get a few readable lines at the top followed by a bunch of gibberish that may be some obfuscated JavaScript; I'm not sure what it is. When it doesn't work, I get this:
Traceback (most recent call last):
  File "/Users/stevenjohnson/lab/so/ReadAFile.py", line 8, in <module>
    last_table = html.find_all('table')[-1]
IndexError: list index out of range
I get back a 200 status code in either case.
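If the site really is filtering on the agent signature, one quick experiment is to send a browser-like User-Agent header. This is just a sketch of the idea; the header string below is an arbitrary example, not something I tested against this site:
import requests

# An arbitrary browser-like User-Agent string (illustrative only).
headers = {
    "User-Agent": ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/73.0 Safari/537.36")
}

page = requests.get(
    "https://www.uobgroup.com/online-rates/gold-and-silver-prices.page",
    headers=headers,
    verify=False,
)
print(page.status_code)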
Here's my version of the code:
from requests import get
from bs4 import BeautifulSoup
from collections import defaultdict
page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page", verify=False)
html = BeautifulSoup(page.content, 'html.parser')
result = defaultdict(list)
last_table = html.find_all('table')[-1]
print(last_table)
I'm on a Mac. Maybe you're not, and the certificate chains on your machine are different than on mine, and so you're not able to get as far as I can. I wanted you to know, however, that your code does work for me with just verify=False.
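Two small additions you may want on top of that: verify=False makes urllib3 print an InsecureRequestWarning on every request, and find_all('table') comes back empty on the bad responses, which is what causes the IndexError above. A sketch that handles both, assuming you are comfortable silencing the warning:
from requests import get
from bs4 import BeautifulSoup
import urllib3

# We are knowingly skipping certificate verification, so silence the
# InsecureRequestWarning that verify=False would otherwise print.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page",
           verify=False)
html = BeautifulSoup(page.content, 'html.parser')

tables = html.find_all('table')
if tables:
    print(tables[-1])
else:
    # The intermittent "gibberish" responses have no <table> at all.
    print("No tables in this response (%d bytes)" % len(page.content))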
Source: https://stackoverflow.com/questions/55886488/max-retries-exceeded-with-url-requests-python