Fetch all href links using Selenium in Python

Front-end · Unresolved · 6 answers · 2043 views

Asked by 孤街浪徒 on 2020-12-04 14:41

I am practicing Selenium in Python and wanted to fetch all the links on a web page using Selenium.

For example, I want the URL from every href attribute.

6 Answers
  • 2020-12-04 15:07

    Unfortunately, the original link posted by OP is dead...

    If you're looking for a way to scrape links on a page, here's how you can scrape all of the "Hot Network Questions" links on this page with gazpacho:

    from gazpacho import Soup
    
    url = "https://stackoverflow.com/q/34759787/3731467"
    
    soup = Soup.get(url)
    a_tags = soup.find("div", {"id": "hot-network-questions"}).find("a")
    
    [a.attrs["href"] for a in a_tags]
    
  • 2020-12-04 15:09

    You can parse the HTML DOM using the htmldom library in Python. You can find it here and install it using pip:

    https://pypi.python.org/pypi/htmldom/2.0

    from htmldom import htmldom
    dom = htmldom.HtmlDom("https://www.github.com/")  
    dom = dom.createDom()
    

    The above code creates an HtmlDom object. HtmlDom takes one default parameter, the URL of the page. Once the dom object is created, you need to call the "createDom" method of HtmlDom. This parses the HTML data and constructs a parse tree, which can then be used for searching and manipulating the HTML data. The only restriction the library imposes is that the data, whether it is HTML or XML, must have a root element.

    You can query the elements using the "find" method of HtmlDom object:

    p_links = dom.find("a")
    for link in p_links:
        print("URL: " + link.attr("href"))
    

    The above code will print all the links/URLs present on the web page.
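For comparison, the same link extraction can be done with nothing but Python's standard library. This is a minimal sketch using html.parser against a fixed HTML string (fetching a live page, e.g. with urllib.request, is left out):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag seen during parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<p><a href="/one">one</a> <a href="https://example.com/two">two</a></p>'
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # ['/one', 'https://example.com/two']
```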

  • 2020-12-04 15:22
    import requests
    import bs4

    # Fetch the page with requests and parse it with BeautifulSoup;
    # no browser driver is needed just to list the anchor tags.
    data = requests.get('https://google.co.in/')  # any website
    s = bs4.BeautifulSoup(data.text, 'html.parser')
    for link in s.findAll('a'):
        print(link)
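The loop above prints whole <a> tags. To get just the URLs, you would call link.get('href') on each tag; relative hrefs can then be resolved against the page URL with the standard library's urljoin. A minimal sketch on sample href values (no network access; the href list here is made up for illustration):

```python
from urllib.parse import urljoin

base = "https://google.co.in/"
# Sample href values as they might appear in the page source:
hrefs = ["/search", "https://maps.google.com/", "#top"]

# urljoin leaves absolute URLs alone and resolves relative ones
# (paths and fragments) against the base URL.
absolute = [urljoin(base, h) for h in hrefs]
print(absolute)
```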
    
  • 2020-12-04 15:27

    You can try something like this; an empty string is a substring of every link's text, so it matches all the links on the page:

        links = driver.find_elements_by_partial_link_text('')
    
  • 2020-12-04 15:29

    Well, you simply have to loop through the list:

    elems = driver.find_elements_by_xpath("//a[@href]")
    for elem in elems:
        print(elem.get_attribute("href"))
    

    find_elements_by_* returns a list of elements (note the spelling of 'elements'). Loop through the list, take each element and fetch the required attribute value you want from it (in this case href).
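Note that in Selenium 4 the find_elements_by_* helpers were removed in favor of driver.find_elements(By.XPATH, ...). The extraction loop itself can be factored into a small helper; the FakeElement stub below is a stand-in for real WebElements so the sketch runs without a browser:

```python
# In Selenium 4 the same lookup is written:
#   from selenium.webdriver.common.by import By
#   elems = driver.find_elements(By.XPATH, "//a[@href]")
def collect_hrefs(elems):
    """Return the non-empty href of every element in elems.

    Works with any objects exposing get_attribute("href"), such as
    the WebElements returned by driver.find_elements(...).
    """
    hrefs = []
    for elem in elems:
        href = elem.get_attribute("href")
        if href:  # skip anchors whose href is missing or empty
            hrefs.append(href)
    return hrefs

class FakeElement:  # minimal stand-in for a Selenium WebElement
    def __init__(self, href):
        self._href = href

    def get_attribute(self, name):
        return self._href if name == "href" else None

elems = [FakeElement("https://example.com/a"), FakeElement(None)]
print(collect_hrefs(elems))  # ['https://example.com/a']
```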

  • 2020-12-04 15:30

    I have checked and tested that there is a function named find_elements_by_tag_name() you can use. This example works fine for me.

    elems = driver.find_elements_by_tag_name('a')
    for elem in elems:
        href = elem.get_attribute('href')
        if href is not None:
            print(href)
    