Fetching multiple urls with aiohttp in Python 3.5


Since Python 3.5 introduced async with, the syntax recommended in the docs for aiohttp has changed. Now, to fetch a single url, they suggest:

import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://python.org')
        print(html)

asyncio.run(main())

How can I modify this to fetch a collection of urls instead of just one?
1 Answer

    For parallel execution you need to wrap each coroutine in an asyncio.Task.

    I've converted your example so it fetches data concurrently from several sources:

    import aiohttp
    import asyncio

    async def fetch(session, url):
        # The shared session reuses its connection pool across requests;
        # raise_for_status() turns 4xx/5xx responses into exceptions
        async with session.get(url) as response:
            if response.status != 200:
                response.raise_for_status()
            return await response.text()

    async def fetch_all(session, urls):
        # Wrap each coroutine in a Task so every request starts
        # immediately and they all run concurrently
        tasks = []
        for url in urls:
            task = asyncio.create_task(fetch(session, url))
            tasks.append(task)
        # gather waits for all tasks and returns results in input order
        results = await asyncio.gather(*tasks)
        return results

    async def main():
        urls = ['http://cnn.com',
                'http://google.com',
                'http://twitter.com']
        # One ClientSession is shared by all requests
        async with aiohttp.ClientSession() as session:
            htmls = await fetch_all(session, urls)
            print(htmls)

    if __name__ == '__main__':
        asyncio.run(main())
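
    Note that because fetch raises on bad statuses, asyncio.gather re-raises the first exception and the other results are lost. If you would rather collect per-URL failures, a minimal sketch is to pass return_exceptions=True (the fetch_all_safe name and the error reporting are my own illustration, not part of the original answer; it assumes the fetch coroutine above):

    async def fetch_all_safe(session, urls):
        tasks = [asyncio.create_task(fetch(session, url)) for url in urls]
        # return_exceptions=True puts raised exceptions into the results
        # list instead of re-raising the first one
        results = await asyncio.gather(*tasks, return_exceptions=True)
        for url, result in zip(urls, results):
            if isinstance(result, Exception):
                print('Failed to fetch %s: %s' % (url, result))
        return results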
    
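    One caveat for the Python 3.5 in the question's title: asyncio.run and asyncio.create_task were only added in Python 3.7. On 3.5/3.6 the usual equivalents are asyncio.ensure_future and an explicit event loop; here is a rough sketch of the same program under that assumption:

    import aiohttp
    import asyncio

    async def fetch(session, url):
        async with session.get(url) as response:
            return await response.text()

    async def fetch_all(session, urls):
        # ensure_future is the pre-3.7 way to schedule a Task
        tasks = [asyncio.ensure_future(fetch(session, url)) for url in urls]
        return await asyncio.gather(*tasks)

    async def main():
        async with aiohttp.ClientSession() as session:
            return await fetch_all(session, ['http://python.org'])

    if __name__ == '__main__':
        # get_event_loop/run_until_complete stand in for asyncio.run
        loop = asyncio.get_event_loop()
        print(loop.run_until_complete(main()))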