Logging HTML requests in Robot Framework


Selenium only emulates user behaviour, so it does not help you here. You could use a proxy that logs all the traffic and lets you examine it. BrowserMob Proxy lets you do that. See the Create Webdriver keyword in Selenium2Library for how to configure a proxy for your browser.

This way you can ask your proxy to return the traffic after you notice a failure in your test.
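
As a rough sketch of that idea (assuming a BrowserMob `proxy` and a `driver` already set up as in the answer below; the URL, HAR reference name and output file are placeholders), the HAR can be written out only when a step fails:

    import json

    proxy.new_har("failure-demo", {'captureHeaders': True})
    try:
        driver.get("http://www.example.com")
        assert "Example Domain" in driver.title
    except Exception:
        # Dump everything captured since new_har() so the failed request can be inspected.
        with open("failure_har.json", "w") as har_file:
            json.dump(proxy.har, har_file)
        raise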

I have implemented the same thing using BrowserMob Proxy. It captures network traffic based on what the test requires.

The first function, CaptureNetworkTraffic(), opens the browser with the configuration provided in its parameters and writes the captured traffic to a HAR file.

The second function, Parse_Request_Response(), reads the HAR file produced by the first function and returns the relevant network data, depending on which parameters are set.

e.g.

print(Parse_Request_Response("g:\\har.txt", "google.com", True, True, False, False, False))

In this case it looks for entries whose URL contains "google.com" and returns the response content and request headers for that URL.
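
The positional booleans correspond to response, request_header, request_cookies, response_header and response_cookies in that order, so the same call can be written with keyword arguments (using the same hypothetical HAR path):

    print(Parse_Request_Response("g:\\har.txt", "google.com",
                                 response=True, request_header=True))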

    from browsermobproxy import Server
    from selenium import webdriver
    import json

    def CaptureNetworkTraffic(url, server_ip, headers, file_path):
        '''
        Capture network traffic from the browser. Using this function we can capture
        headers/cookies/HTTP calls made from the browser.
        url       - page URL
        server_ip - IP address that the example hosts below are remapped to
        headers   - dictionary of headers to be set on every request
        file_path - file in which the HAR gets stored
        '''
        server = Server("G:\\browsermob\\bin\\browsermob-proxy", {'port': 9090})  # path to the BrowserMob Proxy binary
        server.start()
        proxy = server.create_proxy()
        proxy.remap_hosts("www.example.com", server_ip)
        proxy.remap_hosts("www.example1.com", server_ip)
        proxy.remap_hosts("www.example2.com", server_ip)
        proxy.headers(headers)
        profile = webdriver.FirefoxProfile()
        profile.set_proxy(proxy.selenium_proxy())
        driver = webdriver.Firefox(firefox_profile=profile)
        proxy.new_har("google", {'captureHeaders': True, 'captureContent': True})
        driver.get(url)
        har = proxy.har  # grab the HAR JSON blob before the proxy is shut down
        server.stop()
        driver.quit()
        with open(file_path, 'w') as har_file:
            json.dump(har, har_file)


    def Parse_Request_Response(filename, url, response=False, request_header=False,
                               request_cookies=False, response_header=False, response_cookies=False):
        '''
        Read the HAR file written by CaptureNetworkTraffic and return the requested
        parts of the entries whose request URL contains `url` (the last match wins).
        '''
        resp = {}
        with open(filename, 'rb') as har_file:
            har = json.loads(har_file.read())
        for entry in har['log']['entries']:
            if url in entry['request']['url']:
                resp['request'] = entry['request']['url']
                if response:
                    resp['response'] = entry['response']['content']
                if request_header:
                    resp['request_header'] = entry['request']['headers']
                if request_cookies:
                    resp['request_cookies'] = entry['request']['cookies']
                if response_header:
                    resp['response_header'] = entry['response']['headers']
                if response_cookies:
                    resp['response_cookies'] = entry['response']['cookies']
        return resp


    if __name__ == "__main__":
        headers = {"User-Agent": "Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334 Safari/7534.48.3"}
        CaptureNetworkTraffic("http://www.google.com", "192.168.1.1", headers, "g:\\har.txt")
        print(Parse_Request_Response("g:\\har.txt", "google.com", False, True, False, False, False))