selenium-webdriver

Get a page with Selenium but wait for element value to not be empty [duplicate]

人走茶凉 submitted on 2020-12-12 11:57:08
Question: This question already has answers here: How to extract data from the following html? (3 answers) Assert if text within an element contains specific partial text (1 answer) Closed 18 days ago. I'm grabbing a web page using Selenium, but I need to wait for a certain value to load. I don't know what the value will be, only which element it will be present in. It seems that using the expected condition text_to_be_present_in_element_value or text_to_be_present_in_element is the most likely way
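
The answer excerpt above is cut off, but the question already points at explicit waits. A minimal Python sketch of a custom wait condition that succeeds once the element's text is non-empty, without knowing the text in advance (the URL and the By.ID locator "price" are hypothetical placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

def non_empty_text(locator):
    # Custom wait condition: return the element once its text is non-empty,
    # otherwise return False so that WebDriverWait keeps polling.
    def condition(driver):
        element = driver.find_element(*locator)
        return element if element.text.strip() else False
    return condition

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

element = WebDriverWait(driver, 10).until(non_empty_text((By.ID, "price")))
print(element.text)

For <input>-style elements, check element.get_attribute("value") instead of element.text inside the condition.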

How to wait for a text field to be editable with selenium in python 3.8

江枫思渺然 submitted on 2020-12-12 05:40:31
Question: I am making a Selenium bot as a fun project that is supposed to play typeracer for me, and I am having a bit of trouble getting it to wait for the countdown to finish before it tries to start typing. The best way that I have found to do this is to wait for the text input field to be editable instead of waiting for the countdown popup to be gone, but as I said before, I can't get it to wait unless I use a time.sleep() function. This wouldn't work well because we
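
A minimal Python sketch of one way to avoid time.sleep() here: poll with an explicit wait until the input field is clickable, which in practice means it is visible and no longer disabled by the countdown (the URL and the CSS selector are hypothetical placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://play.typeracer.com")  # assumed URL for the game

# element_to_be_clickable waits until the element is both visible and enabled,
# which is usually enough for a text field to count as "editable".
field = WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "input.txtInput"))  # hypothetical selector
)
field.send_keys("the race has started")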

Unable to locate element: {"method":"xpath","selector":"//li[@id="tablist1-tab3"]"} error using Selenium and Java

為{幸葍}努か submitted on 2020-12-12 04:47:29
Question: I have received this error several times: Unable to locate element: {"method":"xpath","selector":"//li[@id="tablist1-tab3"]"} The code I have used is: options.addArguments("--headless"); options.addArguments("window-size=1200x900"); driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS); WebElement tab = driver.findElement(By.xpath("//li[@id=\"tablist1-tab3\"]")); tab.click(); Can someone help me with this error? Answer 1: You need to use WebDriverWait for the elementToBeClickable()
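
The answer above is truncated; its point is to replace the implicit wait with an explicit one so the click only happens once the tab is actually ready. A minimal sketch of that idea in Python (the question itself uses Java, where WebDriverWait with ExpectedConditions.elementToBeClickable plays the same role; the URL is a placeholder):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("window-size=1200x900")
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")  # placeholder URL

# Wait up to 10 seconds until the tab is clickable instead of failing as soon
# as the implicit wait expires.
tab = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//li[@id='tablist1-tab3']"))
)
tab.click()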

When developing e2e tests, why are data-* attributes preferred for element selection over a plain id attribute

家住魔仙堡 submitted on 2020-12-11 00:46:07
Question: Cypress and many other posts about testing web applications suggest relying on a data attribute like data-cy or data-test-id for locating elements rather than relying on the id attribute. My understanding is that this is for two reasons: The modern way of re-using components can lead to having multiple components of the same type, and therefore multiple of those ids on the same page; but this should also apply to the 'data-cy' or 'data-test-id' attributes. When ids are tied to CSS, there is a
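
For concreteness, a small Python Selenium sketch of what selecting by a dedicated test attribute looks like (the element and its data-test-id value are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# The test attribute exists only for tests, so refactoring CSS classes or ids
# for styling or component re-use does not break this locator.
button = driver.find_element(By.CSS_SELECTOR, "[data-test-id='submit-button']")
button.click()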

Python + Selenium: get span value from “ng-bind”

你说的曾经没有我的故事 submitted on 2020-12-10 11:57:29
Question: So I have Selenium code that goes to a page using Chrome. On that page, there is this HTML: <span ngbind="pageData.Message">Heloooo</span> How can I get the value using Python and Selenium? So only the Heloooo. Thanks! Answer 1: You can use the following CSS selector to locate the element: span[ngbind='pageData.Message'] Code: element = driver.find_element_by_css_selector("span[ngbind='pageData.Message']") print(element.text) # Will print the "Heloooo" value. Hope it helps you! Answer 2: You
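
Because the text is filled in by an Angular binding, it may not be present the instant the page loads. A sketch that pairs the answer's CSS selector with an explicit wait for a non-empty binding (a minimal sketch, assuming the attribute really is ngbind as written in the question; the URL is a placeholder):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

locator = (By.CSS_SELECTOR, "span[ngbind='pageData.Message']")
# Wait until the framework has filled the binding with a non-empty string.
WebDriverWait(driver, 10).until(lambda d: d.find_element(*locator).text.strip() != "")
print(driver.find_element(*locator).text)  # "Heloooo"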

Selenium works on AWS EC2 but not on AWS Lambda

回眸只為那壹抹淺笑 submitted on 2020-12-09 06:40:28
Question: I've looked at and tried nearly every other post on this topic with no luck. EC2: I'm using Python 3.6, so I'm using the following AMI, amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (see here). Once I SSH into my EC2 instance, I download Chrome with: sudo curl https://intoli.com/install-google-chrome.sh | bash cp -r /opt/google/chrome/ /home/ec2-user/ google-chrome-stable --version # Google Chrome 86.0.4240.198 And then download and unzip the matching Chromedriver: sudo wget https://chromedriver.storage
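
The question is cut off above. When a working EC2 setup is moved to Lambda, the usual difference is the restricted execution environment (no sandbox user, a tiny /dev/shm, and only /tmp writable). A hedged Python sketch of the Chrome options commonly needed inside Lambda; the binary paths are assumptions that depend on how Chrome and Chromedriver are packaged into the function or a layer, and executable_path is the Selenium 3 style of the question's era:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.binary_location = "/opt/headless-chromium"      # assumed packaging path
options.add_argument("--headless")
options.add_argument("--no-sandbox")                    # Lambda has no sandbox user
options.add_argument("--single-process")
options.add_argument("--disable-dev-shm-usage")         # /dev/shm is very small on Lambda
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1280x1696")
options.add_argument("--user-data-dir=/tmp/user-data")  # only /tmp is writable
options.add_argument("--disk-cache-dir=/tmp/cache-dir")

driver = webdriver.Chrome(executable_path="/opt/chromedriver", options=options)
driver.get("https://example.com")  # placeholder URL
print(driver.title)
driver.quit()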

Driver doesn't return proper page source

本小妞迷上赌 submitted on 2020-12-08 02:35:54
Question: I'm trying to load a web page, scroll to the very bottom of the page (there is an infinite scroll), and then get the page source code. Scrolling and loading seem to work correctly, but driver.page_source returns a very short HTML string which is only a small part of the whole page source. def scroll_to_the_bottom(driver): old_html = '' new_html = driver.page_source while old_html != new_html: print 'SCROLL' old_html = driver.page_source driver.execute_script("window.scrollTo(0, document.body
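
A minimal sketch of a scroll loop that compares the document's scroll height between rounds and gives the page time to load each new batch, before reading page_source once at the end (the URL and timings are placeholders):

import time
from selenium import webdriver

def scroll_to_the_bottom(driver, pause=2.0, max_rounds=50):
    # Scroll, wait for the next batch to load, and stop once the scrollable
    # height stops growing, i.e. the infinite scroll has run out of content.
    last_height = driver.execute_script("return document.body.scrollHeight")
    for _ in range(max_rounds):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break
        last_height = new_height

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL
scroll_to_the_bottom(driver)
html = driver.page_source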

Unable to pass cookies between selenium and requests in order to do the scraping using the latter

这一生的挚爱 submitted on 2020-12-07 06:37:30
Question: I've written a script in Python, in combination with Selenium, to log into a site and then transfer the cookies from the driver to requests so that I can go ahead and use requests for further activities. I used the line item = soup.select_one("div[class^='gravatar-wrapper-']").get("title") to check whether the script can fetch my username once everything is done. This is my try so far: import requests from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.common.keys
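
The script in the question is truncated above; a minimal sketch of just the cookie hand-off from Selenium to requests, assuming the login has already been performed with the driver (the URLs are hypothetical, the selector is the one from the question):

import requests
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://stackoverflow.com/users/login")  # hypothetical login page
# ... perform the login steps with the driver here ...

# Copy every cookie from the browser session into a requests.Session, and reuse
# the same User-Agent so the site sees a consistent client.
session = requests.Session()
session.headers["User-Agent"] = driver.execute_script("return navigator.userAgent;")
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"], domain=cookie.get("domain"))

response = session.get("https://stackoverflow.com/")  # hypothetical page to verify
soup = BeautifulSoup(response.text, "html.parser")
item = soup.select_one("div[class^='gravatar-wrapper-']").get("title")
print(item)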