BeautifulSoup parser can't access html elements

Backend · Unresolved · 1 answer · 1881 views
长情又很酷 2020-12-22 12:29

I am trying to scrape the hrefs of all the listings. I am fairly new to BeautifulSoup and have only done a bit of scraping before. But I can'

1 Answer
  • 2020-12-22 13:01

    The page is rendered with JavaScript. There are several ways to render and scrape it.

    I can scrape it with Selenium. First install Selenium:

    sudo pip3 install selenium
    

    Then download a driver from https://sites.google.com/a/chromium.org/chromedriver/downloads. On Windows or Mac you can use a headless build of Chrome such as "Chrome Canary".

    from bs4 import BeautifulSoup
    from selenium import webdriver
    
    # Let Chrome render the JavaScript, then hand the finished HTML to BeautifulSoup.
    browser = webdriver.Chrome()
    url = 'https://www.takealot.com/computers/laptops-10130'
    browser.get(url)
    respData = browser.page_source
    browser.quit()
    
    soup = BeautifulSoup(respData, 'html.parser')
    containers = soup.find_all("div", {"class": "p-data left"})
    for container in containers:
        print(container.text)
        print(container.find("span", {"class": "amount"}).text)
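
    Once the rendered HTML is in hand, collecting the hrefs you originally asked about is plain BeautifulSoup work. A minimal sketch on sample markup (the markup below is illustrative, not Takealot's real structure):

    ```python
    from bs4 import BeautifulSoup

    # Stand-in for the rendered page source returned by the browser.
    html = """
    <div class="p-data left">
      <a href="/laptop-1">Laptop 1</a>
      <span class="amount">3999</span>
    </div>
    """
    soup = BeautifulSoup(html, "html.parser")
    # CSS selector: anchors with an href inside the listing containers.
    hrefs = [a["href"] for a in soup.select("div.p-data a[href]")]
    print(hrefs)  # ['/laptop-1']
    ```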
    

    Alternatively use PyQt5:

    from PyQt5.QtCore import QUrl
    from PyQt5.QtWebKitWidgets import QWebPage
    from PyQt5.QtWidgets import QApplication
    from bs4 import BeautifulSoup
    import sys
    
    
    class Render(QWebPage):
        def __init__(self, url):
            self.app = QApplication(sys.argv)
            QWebPage.__init__(self)
            self.loadFinished.connect(self._loadFinished)
            self.mainFrame().load(QUrl(url))
            self.app.exec_()
    
        def _loadFinished(self, result):
            self.frame = self.mainFrame()
            self.app.quit()
    
    url = 'https://www.takealot.com/computers/laptops-10130'
    r = Render(url)
    respData = r.frame.toHtml()
    soup = BeautifulSoup(respData, 'html.parser')
    containers = soup.find_all("div", {"class": "p-data left"})
    for container in containers:
        print(container.text)
        print(container.find("span", {"class": "amount"}).text)
    

    Alternatively use dryscrape:

    from bs4 import BeautifulSoup
    import dryscrape
    
    url = 'https://www.takealot.com/computers/laptops-10130'
    session = dryscrape.Session()
    session.visit(url)
    respData = session.body()
    soup = BeautifulSoup(respData, 'html.parser')
    containers = soup.find_all("div", {"class": "p-data left"})
    for container in containers:
        print(container.text)
        print(container.find("span", {"class": "amount"}).text)
    

    Outputs in all cases:

    Dell Inspiron 3162 Intel Celeron 11.6" Wifi Notebook (Various Colours)11.6 Inch Display; Wifi Only (Red; White & Blue Available)R 3,999R 4,999i20% OffeB 39,990Discovery Miles 39,990On Credit: R 372 / monthi
    3,999
    HP 250 G5 Celeron N3060 Notebook - Dark ash silverNBHPW4M70EAR 4,499R 4,999ieB 44,990Discovery Miles 44,990On Credit: R 419 / monthiIn StockShippingThis item is in stock in our CPT warehouse and can be shipped from there. You can also collect it yourself from our warehouse during the week or over weekends.CPT | ShippingThis item is in stock in our JHB warehouse and can be shipped from there. No collection facilities available, sorry!JHBWhen do I get it?
    4,499
    Asus Vivobook ...
    

    However, when testing with your URL I observed that the results were not reproducible every time: occasionally "containers" was empty even after the page had rendered.
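
    One way to work around that kind of flakiness is to re-fetch the page until the containers actually appear. A small generic retry helper, as a sketch (`fetch` and `check` are placeholders for your own page-source call and emptiness test):

    ```python
    import time

    def retry_until(fetch, check, attempts=3, delay=1.0):
        """Call fetch() until check(result) is truthy or attempts run out.

        Returns the last result either way, so the caller can decide
        what to do with a still-empty page.
        """
        result = None
        for _ in range(attempts):
            result = fetch()
            if check(result):
                break
            time.sleep(delay)
        return result

    # Illustrative use with a fake fetch that succeeds on the second call:
    calls = iter(["", "<div class='p-data left'>laptop</div>"])
    html = retry_until(lambda: next(calls), lambda h: "p-data" in h, delay=0.0)
    print(html)
    ```

    With Selenium, `fetch` would re-request `browser.page_source` and `check` would test whether `soup.find_all("div", {"class": "p-data left"})` is non-empty.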
