Question
I have a page whose source I need to get for BS4, but the middle of the page takes about a second (maybe less) to load its content, and requests.get captures the page source before that section has loaded. How can I wait a second before getting the data?
import requests
from bs4 import BeautifulSoup

r = requests.get(URL + self.search, headers=USER_AGENT, timeout=5)
soup = BeautifulSoup(r.content, 'html.parser')
a = soup.find_all('section', 'wrapper')
The relevant part of the page:
<section class="wrapper" id="resultado_busca">
Answer 1:
It doesn't look like a problem of waiting; it looks like the element is created by JavaScript, and requests can't handle elements that are generated dynamically by JavaScript. A suggestion is to use selenium together with PhantomJS to get the page source; you can then use BeautifulSoup for your parsing. The code shown below does exactly that:
from bs4 import BeautifulSoup
from selenium import webdriver

url = "http://legendas.tv/busca/walking%20dead%20s03e02"

# PhantomJS executes the page's JavaScript, so the dynamic content is already
# present in page_source when we read it.
browser = webdriver.PhantomJS()
browser.get(url)
html = browser.page_source
browser.quit()

soup = BeautifulSoup(html, 'lxml')
a = soup.find('section', 'wrapper')
Also, there's no need to use .find_all() if you are only looking for one element; .find() returns the first match.
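If you do want an explicit wait rather than simply reading page_source after the driver has loaded the page, Selenium's WebDriverWait can block until the element exists. Below is a minimal sketch of that idea; it assumes a headless Chrome driver (a substitute for PhantomJS, which has since been deprecated in Selenium) and waits for the resultado_busca id shown in the question:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = "http://legendas.tv/busca/walking%20dead%20s03e02"

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # assumption: headless Chrome instead of PhantomJS
browser = webdriver.Chrome(options=options)

try:
    browser.get(url)
    # Block for up to 10 seconds until the dynamically created section is in the DOM.
    WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.ID, "resultado_busca"))
    )
    soup = BeautifulSoup(browser.page_source, "html.parser")
    section = soup.find("section", "wrapper")
finally:
    browser.quit()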
Answer 2:
In Python 3, using the urllib module can work better in practice for loading dynamic webpages than the requests module. For example:
import urllib.request
import urllib.error

try:
    with urllib.request.urlopen(url) as response:
        html = response.read().decode('utf-8')  # use whatever encoding the webpage actually uses
except urllib.error.HTTPError as e:
    if e.code == 404:
        print(f"{url} is not found")
    elif e.code == 503:
        print(f"{url} base webservices are not available")
        ## can add authentication here
    else:
        print('http error', e)
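Whichever way the HTML is fetched, it can then be handed to BeautifulSoup just as in the question. A minimal sketch, assuming the html string from the try block above and the selectors from the question (keep in mind that, as noted in Answer 1, content created by JavaScript will still be absent from a plain HTTP response):

from bs4 import BeautifulSoup

# Parse the fetched HTML and look for the same section as in the question.
soup = BeautifulSoup(html, 'html.parser')
section = soup.find('section', 'wrapper')  # <section class="wrapper" id="resultado_busca">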
Source: https://stackoverflow.com/questions/45448994/wait-page-to-load-before-getting-data-with-requests-get-in-python-3