Webpage values are missing while scraping data using BeautifulSoup python 3.6

Posted by 删除回忆录丶 on 2020-01-24 09:45:31

Question


I am using the script below to scrape the "STOCK QUOTE" data from http://fortune.com/fortune500/xcel-energy/, but it returns blank values.

I have also tried a Selenium driver, but I get the same issue. Please help with this.

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

r = requests.get('http://fortune.com/fortune500/xcel-energy/')
soup = bs(r.content, 'lxml')  # also tried: 'html.parser'

data = pd.DataFrame(columns=['C1','C2','C3','C4'], dtype='object', index=range(0,11))
for table in soup.find_all('div', {'class': 'stock-quote row'}):
    row_marker = 0
    for row in table.find_all('li'):
        column_marker = 0
        columns = row.find_all('span')
        for column in columns:
            data.iat[row_marker, column_marker] = column.get_text()
            column_marker += 1
        row_marker += 1
print(data)

Output:

              C1    C2   C3   C4
0       Previous Close:         NaN  NaN
1           Market Cap:   NaNB  NaN    B
2   Next Earnings Date:         NaN  NaN
3                 High:         NaN  NaN
4                  Low:         NaN  NaN
5         52 Week High:         NaN  NaN
6          52 Week Low:         NaN  NaN
7     52 Week Change %:   0.00  NaN  NaN
8            P/E Ratio:    n/a  NaN  NaN
9                  EPS:         NaN  NaN
10      Dividend Yield:    n/a  NaN  NaN


Answer 1:


It looks like the data you are looking for is available at this API endpoint:

import requests

response = requests.get("http://fortune.com/api/v2/company/xel/expand/1")
data = response.json()
print(data['ticker'])
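Once you have the JSON, you can flatten the quote fields into a DataFrame instead of filling a preallocated one cell by cell. A minimal sketch, using a hypothetical sample dict since the actual key names in the API's `ticker` section are not shown here and should be inspected first:

```python
import pandas as pd

# Hypothetical shape of the 'ticker' section -- the real API response
# may use different keys, so inspect data['ticker'] before relying on these.
ticker = {
    'previousClose': 47.05,
    'marketCap': 23900000000,
    'peRatio': 20.9,
    'dividendYield': 3.1,
}

# Flatten the dict into a two-column label/value table.
df = pd.DataFrame(sorted(ticker.items()), columns=['field', 'value'])
print(df)
```

This avoids hard-coding the number of rows and columns, which is what produced the stray `NaN` columns in the question's output.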

FYI, when opening the page in a Selenium-automated browser, you just need to wait for the desired data to appear before parsing the HTML. Working code:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd


url = 'http://fortune.com/fortune500/xcel-energy/'
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get(url)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".stock-quote")))

page_source = driver.page_source
driver.close()

# HTML parsing part
soup = BeautifulSoup(page_source, 'lxml')  # also tried: 'html.parser'

data = pd.DataFrame(columns=['C1','C2','C3','C4'], dtype='object', index=range(0,11))
for table in soup.find_all('div', {'class': 'stock-quote'}):
    row_marker = 0
    for row in table.find_all('li'):
        column_marker = 0
        columns = row.find_all('span')
        for column in columns:
            data.iat[row_marker, column_marker] = column.get_text()
            column_marker += 1
        row_marker += 1
print(data)
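The nested-counter parsing loop can also be written more idiomatically as a list comprehension, building the DataFrame from the collected rows rather than writing into a preallocated one with `iat`. A sketch against a minimal HTML sample that assumes the page's structure (label/value `<span>` pairs inside each `<li>`):

```python
from bs4 import BeautifulSoup
import pandas as pd

# Minimal sample mirroring the assumed page structure; the live page
# nests each metric in an <li> containing label and value <span>s.
html = """
<div class="stock-quote">
  <ul>
    <li><span>Previous Close:</span><span>47.05</span></li>
    <li><span>P/E Ratio:</span><span>20.9</span></li>
  </ul>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')

# One row per <li>, one column per <span>.
rows = [
    [span.get_text(strip=True) for span in li.find_all('span')]
    for li in soup.select('div.stock-quote li')
]
df = pd.DataFrame(rows, columns=['label', 'value'])
print(df)
```

Because the DataFrame is sized from the parsed rows, there are no leftover `NaN` cells when the page yields fewer items than expected.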


Source: https://stackoverflow.com/questions/45533571/webpage-values-are-missing-while-scraping-data-using-beautifulsoup-python-3-6
