selenium

Web scraping a JavaScript page in Python

五迷三道 submitted on 2021-02-05 12:13:02
Question: Hello World. I am new to Python and I am trying to scrape a JavaScript page: https://search.gleif.org/#/search/ Please find below the result from my code (using requests): <!DOCTYPE html> <html> <head><meta charset="utf-8"/> <meta content="width=device-width,initial-scale=1" name="viewport"/> <title>LEI Search 2.0</title> <link href="/static/icons/favicon.ico" rel="shortcut icon" type="image/x-icon"/> <link href="https://fonts.googleapis.com/css?family=Open+Sans:200,300,400,600,700,900&subset

Alert is not displayed

狂风中的少年 submitted on 2021-02-05 11:34:47
Question: My test checks whether the user can log in to the site. I wrote code that handles the alert, but it only works inside a try/catch block; when I write the alert handling without try/catch, it throws an error. How can I handle the alert without try/catch so that the alert is displayed and the test passes? org.openqa.selenium.TimeoutException: Expected condition failed: waiting for alert to be present (tried for 5 second(s) with 500 MILLISECONDS interval) This is my code: public void getMessage() { //Verify that is message

JavaFX Thread freeze

时间秒杀一切 submitted on 2021-02-05 10:49:50
Question: I'm currently working on a JavaFX project. On GUI initialization I want to read some information out of an HTML document using Selenium and FirefoxDriver. Normally I would use a crawler to get the information, but this document is full of JavaScript, so I was only able to get at it using Selenium (I know, it's really bad). The problem is that this process takes up to 15 seconds, and I want to show Selenium's progress on a JavaFX progress bar. So I've set up a Thread doing all the work
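The underlying pattern, independent of toolkit, is to run the slow Selenium work on a background thread and report progress through a callback so the UI thread never blocks; in JavaFX this maps to javafx.concurrent.Task and its updateProgress method. A minimal sketch of the pattern in Python, with a stand-in for the real Selenium work:

```python
import threading
import time

def slow_scrape(report_progress):
    """Stand-in for the ~15 s Selenium read; reports progress as a fraction."""
    steps = 5
    for i in range(steps):
        time.sleep(0.01)  # placeholder for one chunk of real Selenium work
        report_progress((i + 1) / steps)

progress = []
worker = threading.Thread(target=slow_scrape, args=(progress.append,))
worker.start()
worker.join()
print(progress[-1])  # prints 1.0
```

In JavaFX the equivalent is binding the progress bar's progressProperty to the Task and letting the framework marshal updates back onto the FX application thread.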

Adding open/close Google Chrome browser steps to Selenium linkedin_scraper code

ε祈祈猫儿з submitted on 2021-02-05 10:49:14
Question: I am trying to scrape the LinkedIn profiles of some well-known people. The code takes a list of LinkedIn profile URLs and then uses Selenium and scrape_linkedin to collect the information and save it into a folder as a .json file. The problem I am running into is that LinkedIn naturally blocks the scraper from collecting some profiles. I am always able to get the first profile in the list of URLs. I put this down to the fact that it opens a new Google Chrome window and then goes to the LinkedIn

How to scrape the first element of each parent from The Wall Street Journal market-data quotes using Selenium and Python?

て烟熏妆下的殇ゞ submitted on 2021-02-05 10:46:21
Question: Here is the HTML that I'm trying to scrape: I am trying to get the first instance of 'td' under each 'tr' using Selenium (BeautifulSoup won't work for this site). The list is very long, so I am trying to do it iteratively. Here is my code: from selenium import webdriver import os # define path to chrome driver chrome_driver = os.path.abspath('C:/Users/USER/Desktop/chromedriver.exe') browser = webdriver.Chrome(chrome_driver) browser.get("https://www.wsj.com/market-data/quotes/MET/financials

How to handle lazy-loaded images in selenium?

爱⌒轻易说出口 submitted on 2021-02-05 10:46:11
Question: Before marking as duplicate, please consider that I have already looked through many related Stack Overflow posts, as well as websites and articles. I have not found a solution yet. This question is a follow-up to my question Selenium Webdriver not finding XPATH despite seemingly identical strings. I determined the problem did not in fact come from the XPath method by updating the code to work in a more elegant manner: for item in feed: img_div = item.find_element_by_class_name(

How to run Selenium scripts written in Java from JMeter?

亡梦爱人 submitted on 2021-02-05 09:43:48
Question: I am trying to use my Selenium scripts, written in Java, with JMeter's WebDriver Sampler. Inside the WebDriver Sampler, the language is selected as java, and the following code is added: package automationFramework; public class FirstTestCase { public static void main(String[] args) { // Create a new instance of the Firefox driver WebDriver driver = new ChromeDriver(); //Launch the Online Store Website driver.get("www.google.com"); // Print a Log In message to the screen System.out.println("Successfully

javascript error: arguments[0].scrollIntoView is not a function using selenium on python

廉价感情. submitted on 2021-02-05 09:31:45
Question: I'm using Selenium with Python and I would like to scroll to an element in order to click on it. Everywhere I see that the right way to go directly to the element is: driver = webdriver.Chrome() driver.get(url) element = driver.find_elements_by_class_name('dg-button') driver.execute_script("return arguments[0].scrollIntoView();", element) But I get this error: "javascript error: arguments[0].scrollIntoView is not a function". What am I doing wrong? Thanks Answer 1: Please use the line of

Selenium: how to handle JNLP issue in Chrome and Python

♀尐吖头ヾ submitted on 2021-02-05 09:30:24
Question: In Chrome (Edge or Firefox), the JNLP warning "This type of file can harm your computer" pops up when I try to open a web page containing this Java extension with Selenium WebDriver. There are 2 buttons - Keep, to allow proceeding, and Discard to...discard. The warning blocks any other action because it's probably not possible to allow JNLP and run its installation from the browser via Selenium itself. One possible solution is to use a different browser (or a retired browser like IE) or to use some workaround,

Cannot locate elements using headless mode Selenium

泄露秘密 submitted on 2021-02-05 09:29:38
Question: I cannot locate elements using headless mode because of this restriction: "All users will have to use Google Chrome when accessing our sites." This restriction was added by our admins so that users could only use Google Chrome. My code is @Test(priority = 1) public void setupApplication() throws IOException { /* * open browser (GoogleChrome) and enter user credentials */ ChromeOptions options = new ChromeOptions(); options.addArguments("--window-size=1920,1080"); options.addArguments("--disable-gpu