xpath

How to select the auto suggestion from the dynamic dropdown using Selenium and Java

China☆狼群 submitted on 2021-02-10 13:03:32
Problem: I am trying to select a value for the Subjects field in the following form: https://demoqa.com/automation-practice-form It is an input field that dynamically shows suggestions based on what is typed, and the desired value then has to be selected from those suggestions. I am unable to select the desired value. The code below only populates the input area, but the value is not selected:

    driver.findElement(By.id("subjectsInput")).sendKeys("English");
    driver.findElement(By.id("subjectsInput")).click(); //This line
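
One common fix (a hedged sketch, not the asker's own solution) is to type into the field, wait for the suggestion list to render, and then confirm the highlighted suggestion. The sketch below uses Selenium's Python bindings; the same explicit-wait idea carries over to Java. The locator for the suggestion list is an assumption about the demoqa markup.

    # Hedged sketch: select an auto-suggested option on the demoqa practice form.
    # The suggestion-list locator is a guess about the page's markup.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://demoqa.com/automation-practice-form")

    subjects = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "subjectsInput"))
    )
    subjects.send_keys("English")

    # Wait until the suggestion list is visible, then accept the highlighted entry.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "div[id*='option']"))
    )
    subjects.send_keys(Keys.ENTER)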

Can't get Selenium to click button

两盒软妹~` submitted on 2021-02-10 12:42:34
Problem: Pic of the website's inspect element. More in-depth pic. My code snippet:

    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys
    import time
    from selenium.webdriver.common.action_chains import ActionChains
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from time import sleep
    import requests

    ///

    excel = driver.find_element_by_name('Excel')
    excel.click()

I
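
The question text above is cut off in this listing. As a general, hedged note rather than a diagnosis of this exact page: a click on an element found by name often fails because the locator matches nothing or the element is not yet clickable. Below is a minimal sketch using an explicit wait; the button text 'Excel' is carried over from the snippet, while the URL is a placeholder.

    # Hedged sketch: wait for the control to become clickable before clicking.
    # The visible-text XPath and the URL are assumptions, not the asker's page.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://example.com")  # placeholder URL

    excel = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.XPATH, "//*[contains(text(), 'Excel')]"))
    )
    excel.click()

    # If the click is intercepted by an overlay, a JavaScript click is a common fallback.
    driver.execute_script("arguments[0].click();", excel)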

Selenium Python Get Img SRC Returns Actual Image Data

萝らか妹 submitted on 2021-02-10 12:22:31
Problem: I am working with Selenium in Python and using the Firefox web driver. I am trying to get the SRC of an image. When I first request the SRC I get the actual image data, not the SRC:

    data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQ ...

If I run the exact same code a second time I will get the SRC:

    example.jpg

Here is my code:

    fireFoxOptions = webdriver.FirefoxOptions()
    fireFoxOptions.set_headless()
    browser = webdriver.Firefox(firefox_options=fireFoxOptions)
    element = browser.find_element(By.ID ,
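
The code above is cut off in this listing. A hedged explanation: lazy-loaded images often start out with an inline data: URI in src and only receive the real URL once the page has finished loading, which would explain why a second run sees example.jpg. Below is a minimal sketch that waits until src no longer holds a data: placeholder; the element id and URL are hypothetical.

    # Hedged sketch: wait until the image's src is no longer an inline base64
    # placeholder (common with lazy-loaded images). "my-image" and the URL are
    # hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    options = webdriver.FirefoxOptions()
    options.add_argument("-headless")
    browser = webdriver.Firefox(options=options)
    browser.get("https://example.com")

    def real_src(driver):
        src = driver.find_element(By.ID, "my-image").get_attribute("src")
        return src if src and not src.startswith("data:") else False

    print(WebDriverWait(browser, 10).until(real_src))
    browser.quit()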

Scrapy only returns first result

南笙酒味 submitted on 2021-02-10 09:35:51
Problem: I'm trying to scrape preformatted HTML seen here. But my code only returns 1 price instead of all 10 prices. Code seen here:

    class MySpider(BaseSpider):
        name = "working1"
        allowed_domains = ["steamcommunity.com"]
        start_urls = ["http://steamcommunity.com/market/search/render/?query=&appid=440"]

        def parse(self, response):
            sel = Selector(response)
            price = sel.xpath("//text()[contains(.,'$')]").extract()[0].replace('\\r\\n\\t\\t\\t\\r\\n\\t\\t\\t','')
            print price

I'm super new to scrapy/xpath so I
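
The question above is cut off in this listing. The likely culprit, offered as a hedged observation: .extract()[0] keeps only the first matching text node, so only one price is ever printed. The sketch below iterates over every match instead, using the current Scrapy API (scrapy.Spider, getall()) rather than the older BaseSpider/Selector style; the assumption that this render endpoint wraps the listing markup in a JSON "results_html" field may need checking.

    # Hedged sketch: yield every price-looking text node instead of only the first.
    import json
    import scrapy
    from scrapy.selector import Selector

    class PricesSpider(scrapy.Spider):
        name = "working1"
        allowed_domains = ["steamcommunity.com"]
        start_urls = ["http://steamcommunity.com/market/search/render/?query=&appid=440"]

        def parse(self, response):
            # If the endpoint returns JSON, pull the embedded listing HTML out of it;
            # otherwise fall back to the raw body.
            try:
                html = json.loads(response.text)["results_html"]
            except (ValueError, KeyError, TypeError):
                html = response.text
            sel = Selector(text=html)
            # getall() returns every matching text node, not just the first.
            for raw in sel.xpath("//text()[contains(., '$')]").getall():
                price = raw.strip()
                if price:
                    yield {"price": price}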

rvest function html_nodes returns {xml_nodeset (0)}

余生长醉 submitted on 2021-02-10 06:13:05
Problem: I am trying to scrape data from the following website: http://stats.nba.com/game/0041700404/playbyplay/ I'd like to create a table that includes the date of the game, the scores throughout the game, and the team names. I am using the following code:

    game1 <- read_html("http://stats.nba.com/game/0041700404/playbyplay/")

    #Extracts the Date
    html_nodes(game1, xpath = '//*[contains(concat( " ", @class, " " ), concat( " ", "game-summary-team--vtm", " " ))]//*[contains(concat( " ", @class, " " ),
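
The code above is cut off in this listing. A hedged explanation for the empty {xml_nodeset (0)}: the play-by-play page builds its content with JavaScript, so the static HTML that read_html downloads does not contain those class names. One workaround, sketched below in Python (the same idea works from R with httr/jsonlite), is to call the JSON endpoint the page itself uses; the endpoint name, parameters, and headers here are assumptions and may have changed.

    # Hedged sketch: fetch play-by-play data from the stats API directly.
    # Endpoint name, parameters, and headers are assumptions.
    import requests

    url = "https://stats.nba.com/stats/playbyplayv2"
    params = {"GameID": "0041700404", "StartPeriod": 0, "EndPeriod": 10}
    headers = {
        "User-Agent": "Mozilla/5.0",       # the API tends to reject bare clients
        "Referer": "https://stats.nba.com/",
    }

    resp = requests.get(url, params=params, headers=headers, timeout=30)
    resp.raise_for_status()
    data = resp.json()

    # The payload holds result sets with separate column headers and row values.
    result = data["resultSets"][0]
    rows = [dict(zip(result["headers"], row)) for row in result["rowSet"]]
    print(rows[:3])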

Most Pythonic way to find the sibling of an element in XML

☆樱花仙子☆ submitted on 2021-02-10 06:08:32
Problem: I have the following XML snippet:

    ...snip...
    <p class="p_cat_heading">DEFINITION</p>
    <p class="p_numberedbullet"><span class="calibre10">This</span>, <span class="calibre10">these</span>. </p>
    <p class="p_cat_heading">PRONUNCIATION </p>
    ..snip...

I need to search the totality of the XML, find the heading that has the text DEFINITION, and print the associated definitions. The format of the definitions is not consistent and can change attributes/elements, so the only reliable way of
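
The question above is cut off in this listing. A hedged sketch of one sibling-based approach with lxml: locate the DEFINITION heading, then take the following-sibling paragraphs up to the next heading. The class names come from the snippet; everything else is illustrative.

    # Hedged sketch: collect the paragraphs that follow the DEFINITION heading
    # but precede the next heading. Class names are taken from the snippet.
    from lxml import etree

    xml = """
    <body>
      <p class="p_cat_heading">DEFINITION</p>
      <p class="p_numberedbullet"><span class="calibre10">This</span>, <span class="calibre10">these</span>. </p>
      <p class="p_cat_heading">PRONUNCIATION </p>
    </body>
    """

    root = etree.fromstring(xml)

    # Siblings after the DEFINITION heading whose nearest preceding heading is DEFINITION.
    definitions = root.xpath(
        '//p[@class="p_cat_heading"][normalize-space()="DEFINITION"]'
        '/following-sibling::p[not(@class="p_cat_heading")]'
        '[preceding-sibling::p[@class="p_cat_heading"][1][normalize-space()="DEFINITION"]]'
    )

    for d in definitions:
        print("".join(d.itertext()).strip())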