Python Splinter (SeleniumHQ) how to take a screenshot of many webpages? [Connection refused]

Posted by 久未见 on 2019-12-03 20:22:33

Your problem is that you call browser.quit() inside your loop over the URLs, so the browser is no longer open when you visit the second URL.

Here's an updated version of your code:

from splinter import Browser
import socket

urls = ['http://ubuntu.com/', 'http://xubuntu.org/']

browser = None
try:
    browser = Browser('firefox')
    for i, url in enumerate(urls, start=1):
        try:
            browser.visit(url)
            if browser.status_code.is_success():
                browser.driver.save_screenshot('your_screenshot_%03d.png' % i)
        except socket.gaierror:
            print("URL not found: %s" % url)
finally:
    if browser is not None:
        browser.quit()

The major change is moving the browser.quit() call into a finally block, so that it runs no matter what goes wrong. Note also the use of enumerate to provide both each URL and its index; this is the recommended approach in Python over maintaining your own counter variable.
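To see how enumerate replaces a manual counter, here's a minimal standalone sketch that builds the same screenshot filenames used in the code above, without touching a browser:

```python
urls = ['http://ubuntu.com/', 'http://xubuntu.org/']

# enumerate(urls, start=1) yields (index, value) pairs starting at 1,
# so we get 1-based numbering without maintaining a separate counter.
names = ['your_screenshot_%03d.png' % i for i, url in enumerate(urls, start=1)]
print(names)  # → ['your_screenshot_001.png', 'your_screenshot_002.png']
```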

I'm not sure whether it's relevant to your code, but I found that splinter raised socket.gaierror exceptions rather than urllib2.URLError, so I showed how to trap those as well. I moved this exception handler inside the loop, so the remaining screenshots are still captured even if one or more of the URLs don't resolve.
