selenium

Selenium how to manage wait for page load?

我的梦境 submitted on 2021-01-30 09:12:25
Question: I have been developing web crawlers for a while, and the most common issue for me is waiting for a page to be completely loaded, including requests, frames, and scripts. I mean completely done. I have used several methods to fix it, but when I use more than one thread to crawl websites I always run into the same kind of problem: the driver opens, navigates to the URL, doesn't wait, and moves on to the next URL. My attempts so far are:
JavascriptExecutor js = (JavascriptExecutor) driver.getWebDriver();
String result = js
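A minimal sketch of one common remedy, assuming Selenium 4 (where WebDriverWait accepts a Duration) and a plain WebDriver instance; the driver.getWebDriver() call in the snippet above belongs to the asker's own wrapper. The idea is to poll document.readyState until the browser reports the document complete, then wait explicitly for an element that only appears once the page's scripts have run:

    import java.time.Duration;

    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class PageLoadWait {

        // Block until document.readyState is "complete", or time out.
        static void waitForDocumentReady(WebDriver driver, Duration timeout) {
            new WebDriverWait(driver, timeout).until(d ->
                    "complete".equals(((JavascriptExecutor) d)
                            .executeScript("return document.readyState")));
        }

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com");  // hypothetical URL
                waitForDocumentReady(driver, Duration.ofSeconds(30));
                // readyState says nothing about late AJAX calls, so also wait
                // for a concrete element that the page renders via JavaScript.
                new WebDriverWait(driver, Duration.ofSeconds(30)).until(
                        ExpectedConditions.presenceOfElementLocated(By.tagName("body")));
            } finally {
                driver.quit();
            }
        }
    }

When this runs from several threads, each thread needs its own WebDriver instance; sharing one driver across threads is what usually produces the "doesn't wait, jumps to the next URL" symptom.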

What is the exact purpose of Selenium Grid?

人走茶凉 submitted on 2021-01-29 22:40:31
Question: I am new to Selenium, TestNG, and Selenium Grid, and I am slightly confused about when exactly I need to use Selenium Grid. Below is my understanding; please just let me know if I am right (a short sketch of the remote case follows the list):
- Selenium Grid is only for running your tests remotely on another machine.
- If I need to run my tests in parallel on my local machine, there is no need to use Grid; that can be achieved with TestNG alone.
- If I need to execute my tests in parallel on different remote machines, then I have to use Selenium Grid.
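A minimal sketch of what the third point looks like in code, assuming a Grid hub is already running at a hypothetical address (http://grid-hub:4444): the test only swaps ChromeDriver for RemoteWebDriver, and the Grid decides which node actually runs the browser.

    import java.net.URL;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class GridSmokeTest {
        public static void main(String[] args) throws Exception {
            ChromeOptions options = new ChromeOptions();

            // Hypothetical hub address; the hub forwards the session to a node
            // that has a matching browser available.
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://grid-hub:4444/wd/hub"), options);
            try {
                driver.get("https://example.com");
                System.out.println(driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }

For the purely local case, the second point holds: TestNG's parallel and thread-count attributes on the suite element of testng.xml are enough, as long as each test thread creates its own driver.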

How to retrieve the text of a WebElement using Selenium - Python

左心房为你撑大大i submitted on 2021-01-29 22:11:33
Question: I am new to Python and web scraping, so please bear with me. I have been trying to build a web-scraping tool to open a web page, log in, and retrieve a certain value. Thus far, I have been able to open the web page and log in. However, I simply cannot find a way to retrieve (print) the value that I require. This is what my current code looks like:
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path=r'C:/Users/User/Downloads/chromedriver.exe')
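A minimal sketch of the usual pattern, continuing from a driver like the one above; the URL and CSS selector are hypothetical stand-ins. Locate the element, wait for it if it is rendered by JavaScript, then read its .text property:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()  # assumes chromedriver is on PATH

    try:
        driver.get("https://example.com/account")  # hypothetical page, after logging in

        # Wait until the element that holds the value is present, then read it.
        element = WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "#balance"))  # hypothetical selector
        )
        print(element.text)  # the element's visible text
        # For <input> fields the value lives in an attribute instead:
        # print(element.get_attribute("value"))
    finally:
        driver.quit()

BeautifulSoup is not required for this; Selenium can read the text directly, though you can still pass driver.page_source to BeautifulSoup if you prefer to parse the HTML yourself.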

How does chrome driver interact with Chrome browser?

淺唱寂寞╮ submitted on 2021-01-29 22:10:21
Question: The documentation says that ChromeDriver is a standalone server that implements the W3C WebDriver standard. It looks like the W3C WebDriver standard only defines the interface between the automation program and ChromeDriver: ChromeDriver acts as an HTTP server that receives commands from the automation program. But how does ChromeDriver communicate with Chrome? Still through HTTP? If so, where can we find documentation about the details? And what component inside Chrome is in charge of handling the commands?
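The ChromeDriver-to-Chrome leg is not the same HTTP interface: ChromeDriver drives the browser through Chrome's DevTools automation layer (the Chrome DevTools Protocol), which is documented alongside ChromeDriver and the CDP rather than in the W3C spec. The client-to-ChromeDriver leg, however, really is plain HTTP, and you can exercise it without any Selenium library. A minimal sketch, assuming Java 11's java.net.http and a chromedriver process already running locally on its default port 9515:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RawWebDriverSession {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // POST /session asks ChromeDriver to launch a browser (W3C "New Session").
            String newSession =
                    "{\"capabilities\":{\"alwaysMatch\":{\"browserName\":\"chrome\"}}}";
            HttpRequest create = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9515/session"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(newSession))
                    .build();
            HttpResponse<String> response =
                    client.send(create, HttpResponse.BodyHandlers.ofString());

            // The JSON reply carries a sessionId; later commands go to
            // /session/{sessionId}/..., e.g. POST /session/{id}/url to navigate
            // and DELETE /session/{id} to close the browser again.
            System.out.println(response.body());
        }
    }

The Selenium client libraries are wrappers around exactly these HTTP calls; everything past ChromeDriver is Chrome-internal and goes over the DevTools connection rather than this public HTTP interface.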

Printing array elements in Selenium IDE

一个人想着一个人 submitted on 2021-01-29 21:49:47
Question: Since storeEval and getEval no longer work, I added JavaScript as in image one. Previously I used to do:
storeEval | new Array("car","bus") | vehicles
getEval | myitems=0;
to drive the loop. The output is:
Running 'new array'
13:43:33  1. store on new Array("car","bus") with value vehicles  OK
13:43:34  2. executeScript on return 1 with value myitems  OK
13:43:34  3. while on ${myitems}<3  OK
13:43:34  4. store on myitems with value myvar  OK
13:43:34  echo: javascript{storedVars['vehicles'][storedVars['myvar']]
