Question
I am web scraping https://www.nike.com/w/mens-shoes-nik1zy7ok for shoes. Right now I can retrieve the shoes that load initially, as well as the shoes that load as you scroll to the next page, with the following code:
import re
import json
import requests
from bs4 import BeautifulSoup

url = 'https://www.nike.com/gb/w/womens-shoes-5e1x6zy7ok'
html_data = requests.get(url).text
data = json.loads(re.search(r'window.INITIAL_REDUX_STATE=(\{.*?\});', html_data).group(1))

for p in data['Wall']['products']:
    print(p['title'])
    print(p['subtitle'])
    print(p['price']['currentPrice'], p['price']['currency'])
    print(p['colorways'][0]['images']['portraitURL'].replace('w_400', 'w_1920'))
    print('-' * 120)

next_page = data['Wall']['pageData']['next']
while next_page:
    u = 'https://www.nike.com' + next_page
    data = requests.get(u).json()
    for o in data['objects']:
        p = o['productInfo'][0]
        print(p['productContent']['title'])
        print(p['productContent']['subtitle'])
        print(p['merchPrice']['currentPrice'], p['merchPrice']['currency'])
        print(p['imageUrls']['productImageUrl'])
        print('-' * 120)
    next_page = data.get('pages', {'next': ''})['next']
How do I collect all these shoes into a single dictionary that I can render with:
{% for shoe in shoes['Wall']['products'] %}
<p>{{shoe}}</p>
<h2>New shoe</h2>
{% endfor %}
Answer 1:
Here's a recursive generator function that gets the job done in a pinch, but it's a bit messy. This really needs quite a lot more to be production code, like handling request errors, but it should get you headed in the right direction. Do ask questions if anything is confusing; there are some difficult-to-understand concepts here for the uninitiated.
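As one sketch of the request error handling mentioned above (the retry count, backoff values, and the `get_json_with_retries` name are my assumptions, not part of the answer), each fetch could be wrapped in a small retry helper; the `getter` parameter exists so the helper can be exercised without hitting the network:

```python
import time
import requests

def get_json_with_retries(url, attempts=3, backoff=1.0, getter=requests.get):
    """Fetch url, retrying transient failures with exponential backoff.

    A sketch only: the attempt count and backoff factor are assumed
    defaults, not values taken from the answer below.
    """
    for attempt in range(attempts):
        try:
            response = getter(url, timeout=10)
            response.raise_for_status()  # treat 4xx/5xx as failures too
            return response
        except requests.RequestException:
            if attempt == attempts - 1:
                raise  # out of retries: let the caller see the error
            time.sleep(backoff * 2 ** attempt)
```

The answer's `requests.get(...)` calls could then be swapped for `get_json_with_retries(...)` without changing the rest of the logic.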
import re
import json
import requests

def get_shoes(url="https://www.nike.com/", path=None):
    response = requests.get(f"{url}{path}")
    try:
        data = response.json()
        products = (
            {
                "title": p["productContent"]["title"],
                "subtitle": p["productContent"]["subtitle"],
                "price": p["merchPrice"]["currentPrice"],
                "currency": p["merchPrice"]["currency"],
                "image_url": p["imageUrls"]["productImageUrl"],
            }
            for i in data["objects"]
            for p in i["productInfo"]
        )
        next_page = data.get("pages", {"next": ""})["next"]
    except json.JSONDecodeError:
        data = json.loads(
            re.search(r"window.INITIAL_REDUX_STATE=(\{.*?\});", response.text).group(1)
        )
        products = (
            {
                "title": p["title"],
                "subtitle": p["subtitle"],
                "price": p["price"]["currentPrice"],
                "currency": p["price"]["currency"],
                "image_url": p["colorways"][0]["images"]["portraitURL"].replace(
                    "w_400", "w_1920"
                ),
            }
            for p in data["Wall"]["products"]
        )
        next_page = data["Wall"]["pageData"]["next"]
    for product in products:
        yield product
    if next_page:
        yield from get_shoes(url, next_page)
for shoe in get_shoes(path="gb/w/womens-shoes-5e1x6zy7ok"):
    print(shoe["title"])
    print(shoe["subtitle"])
    print(shoe["price"], shoe["currency"])
    print(shoe["image_url"])
    print("-" * 120)
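To feed the Jinja template in the question, which iterates over `shoes['Wall']['products']`, the generator's output can be wrapped in a dict of that shape. A minimal sketch with stand-in data (`fake_get_shoes` and its two products are placeholders so the example runs without the network; with the real scraper you would call `get_shoes(path=...)` instead):

```python
def fake_get_shoes():
    # Stand-in for get_shoes(), which yields one dict per product;
    # these two products are made up for illustration.
    yield {"title": "Example Shoe A", "subtitle": "Women's Shoe",
           "price": 100, "currency": "GBP",
           "image_url": "https://example.com/a.jpg"}
    yield {"title": "Example Shoe B", "subtitle": "Women's Shoe",
           "price": 120, "currency": "GBP",
           "image_url": "https://example.com/b.jpg"}

# Wrap the generator's output in the shape the template expects:
# shoes['Wall']['products'] is a plain list of product dicts.
shoes = {"Wall": {"products": list(fake_get_shoes())}}

for shoe in shoes["Wall"]["products"]:
    print(shoe["title"])
```

The `shoes` dict can then be passed straight to the template's render context.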
Source: https://stackoverflow.com/questions/62988117/webscraping-information-off-website-using-python-requests