Why doesn't requests.get() return? What is the default timeout that requests.get() uses?

时光说笑 · 2020-11-28 05:09

In my script, requests.get never returns:

import requests

print (\"requesting..\")

# This call never returns!
r = requests.get(
    \"http://w         


        
6 Answers
  •  一整个雨季
    2020-11-28 05:45

    I reviewed all the answers and came to the conclusion that the problem still exists. On some sites, requests may hang indefinitely, and using multiprocessing seems like overkill. Here's my approach (Python 3.5+):

    import asyncio
    
    import aiohttp
    
    
    async def get_http(url):
        # conn_timeout caps connection setup (seconds);
        # read_timeout caps waiting while reading the response
        async with aiohttp.ClientSession(conn_timeout=1, read_timeout=3) as client:
            try:
                async with client.get(url) as response:
                    content = await response.text()
                    return content, response.status
            except Exception:
                # Swallow timeouts/connection errors; caller sees None
                pass
    
    
    loop = asyncio.get_event_loop()
    task = loop.create_task(get_http('http://example.com'))
    loop.run_until_complete(task)
    result = task.result()
    if result is not None:
        content, status = result
        if status == 200:
            print(content)
    
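    For comparison, the hang in the question happens because requests.get() has no default timeout (timeout=None), so it can wait forever. Below is a minimal sketch of the same guard using requests itself, with a (connect, read) timeout tuple mirroring the 1 s/3 s values above:

    import requests

    try:
        # requests has no default timeout; pass one so the call
        # raises instead of hanging. (1, 3) = 1 s to connect,
        # 3 s allowed between bytes of the response.
        r = requests.get('http://example.com', timeout=(1, 3))
        if r.ok:
            print(r.text)
    except requests.exceptions.RequestException:
        pass  # connect/read timeout, DNS failure, etc.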

    UPDATE

    If you receive a deprecation warning about using conn_timeout and read_timeout, see the aiohttp client reference for the ClientTimeout data structure. One simple way to apply that data structure to the aiohttp code above would be:

    async def get_http(url):
        # timeout= replaces the deprecated conn_timeout/read_timeout args
        timeout = aiohttp.ClientTimeout(total=60)
        async with aiohttp.ClientSession(timeout=timeout) as client:
            try:
                ...  # rest unchanged from the function above
    
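    Putting the pieces together, a complete version under the newer API might look like this sketch (total=60 as in the snippet above; asyncio.run() requires Python 3.7+):

    import asyncio

    import aiohttp


    async def get_http(url):
        # ClientTimeout replaces the deprecated conn_timeout/read_timeout
        # args; total=60 caps the whole request. Finer-grained fields
        # such as connect and sock_read also exist.
        timeout = aiohttp.ClientTimeout(total=60)
        async with aiohttp.ClientSession(timeout=timeout) as client:
            try:
                async with client.get(url) as response:
                    return await response.text(), response.status
            except Exception:
                pass  # timed out or failed; caller sees None


    # asyncio.run() replaces the manual event-loop handling above
    result = asyncio.run(get_http('http://example.com'))
    if result is not None:
        content, status = result
        if status == 200:
            print(content)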
