Web scraping with a double loop with Selenium and using By.CSS_SELECTOR

Submitted on 2019-12-06 15:15:05

The solution below should work for you. First, we iterate over the number of years on the CSS slider; then we walk the event list using your code example. I added a sleep call because I kept getting timeouts.

CODE

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time

# Selenium 4 style: Selenium Manager locates the driver binary automatically
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
driver.get("http://www.motogp.com/en/Results+Statistics/")

# Handle of the season slider at the top of the results page
slider = driver.find_element(By.XPATH, '//*[@id="handle_season"]')

for year in range(68):
    # Wait until the event dropdown is (re)populated for the current season
    wait.until(EC.presence_of_all_elements_located((By.XPATH, '//*[@id="event"]')))
    for item in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "#event option"))):
        item.click()
        elem = wait.until(EC.presence_of_element_located((By.CLASS_NAME, "padleft5")))
        print(elem.get_attribute("href"))
        # Wait for the old element to go stale, i.e. the results have refreshed
        wait.until(EC.staleness_of(elem))

    # Move the slider one season back, then give the page a moment to load
    slider.send_keys(Keys.ARROW_LEFT)
    time.sleep(1)


driver.quit()

Result:

If you're working behind a firewall, your ECs often won't fire. See whether a time.sleep(10) gets you past it instead of an EC. Secondly, check driver.page_source before you rely on an EC: if you're behind a firewall, the returned HTML source will tell you.
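One way to apply that page_source check is a small helper that scans the returned HTML for typical block-page markers. This is only a sketch: the marker strings below are illustrative assumptions, not an exhaustive list, and should be adjusted to whatever your firewall actually returns.

```python
# Hypothetical helper: decide whether the HTML we got back looks like a
# firewall/proxy block page rather than the real site. The marker strings
# are illustrative guesses, not a definitive list.
BLOCK_MARKERS = ("access denied", "blocked by", "proxy error", "web filter")

def looks_blocked(page_source: str) -> bool:
    """Return True if the HTML contains a typical block-page marker."""
    html = page_source.lower()
    return any(marker in html for marker in BLOCK_MARKERS)

# Sketch of how it would slot into the Selenium script:
#   driver.get("http://www.motogp.com/en/Results+Statistics/")
#   time.sleep(10)                      # blunt wait instead of an EC
#   if looks_blocked(driver.page_source):
#       raise RuntimeError("Request intercepted by a firewall/proxy")
```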
