aiohttp

Learning asyncio: "coroutine was never awaited" warning

Submitted by 落爺英雄遲暮 on 2020-01-23 01:12:05
Question: I am trying to learn to use asyncio in Python to optimize scripts. My example raises a "coroutine was never awaited" warning; can you help me understand it and find out how to solve it?

    import time
    import datetime
    import random
    import asyncio
    import aiohttp
    import requests

    def requete_bloquante(num):
        print(f'Get {num}')
        uid = requests.get("https://httpbin.org/uuid").json()['uuid']
        print(f"Res {num}: {uid}")

    def faire_toutes_les_requetes():
        for x in range(10):
            requete_bloquante(x)

    print("Bloquant : ")
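The warning typically means a coroutine object was created but never awaited or scheduled on the event loop. As a reference point, here is a minimal sketch of a non-blocking version of the same helpers (an assumption on my part, since the question's async code is cut off above): every coroutine is handed to asyncio.gather(), which is exactly what avoids the warning.

    import asyncio
    import aiohttp

    async def requete_non_bloquante(session, num):
        print(f'Get {num}')
        async with session.get("https://httpbin.org/uuid") as response:
            uid = (await response.json())['uuid']
        print(f"Res {num}: {uid}")

    async def faire_toutes_les_requetes_sans_bloquer():
        async with aiohttp.ClientSession() as session:
            # Calling requete_non_bloquante(...) only creates a coroutine
            # object; passing it to gather() schedules and awaits it, which
            # is what prevents "coroutine was never awaited".
            await asyncio.gather(
                *(requete_non_bloquante(session, x) for x in range(10)))

    asyncio.run(faire_toutes_les_requetes_sans_bloquer())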

Making 1 million requests with aiohttp/asyncio - literally

Submitted by 喜夏-厌秋 on 2020-01-22 15:11:06
Question: I followed this tutorial: https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html and everything works fine when I make around 50,000 requests. But I need to make 1 million API calls, and then I have a problem with this code:

    url = "http://some_url.com/?id={}"
    tasks = set()

    sem = asyncio.Semaphore(MAX_SIM_CONNS)
    for i in range(1, LAST_ID + 1):
        task = asyncio.ensure_future(bound_fetch(sem, url.format(i)))
        tasks.add(task)

    responses = asyncio.gather(*tasks)
    return await
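At this scale the usual failure is memory: ensure_future() creates all one million Task objects before gather() ever runs. A common workaround (a sketch, not the asker's code; fetch_one and the queue layout are my own names) is a fixed pool of workers pulling IDs from a queue, so only MAX_SIM_CONNS tasks exist at once:

    import asyncio
    import aiohttp

    MAX_SIM_CONNS = 50       # assumed value
    LAST_ID = 1_000_000

    async def fetch_one(session, url):
        async with session.get(url) as response:
            return await response.read()

    async def worker(session, queue, results):
        # Each worker pulls IDs until the queue is drained, so at most
        # MAX_SIM_CONNS requests are in flight at any moment.
        while True:
            try:
                i, url = queue.get_nowait()
            except asyncio.QueueEmpty:
                return
            results[i] = await fetch_one(session, url)

    async def main():
        queue = asyncio.Queue()
        for i in range(1, LAST_ID + 1):
            queue.put_nowait((i, "http://some_url.com/?id={}".format(i)))
        results = {}
        async with aiohttp.ClientSession() as session:
            await asyncio.gather(*(worker(session, queue, results)
                                   for _ in range(MAX_SIM_CONNS)))
        return results

    asyncio.run(main())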

asyncio web scraping 101: fetching multiple urls with aiohttp

Submitted by 匆匆过客 on 2020-01-19 06:59:14
Question: In an earlier question, one of the authors of aiohttp kindly suggested a way to fetch multiple URLs with aiohttp using the new async with syntax from Python 3.5:

    import aiohttp
    import asyncio

    async def fetch(session, url):
        with aiohttp.Timeout(10):
            async with session.get(url) as response:
                return await response.text()

    async def fetch_all(session, urls, loop):
        results = await asyncio.wait([loop.create_task(fetch(session, url))
                                      for url in urls])
        return results

    if __name__ == '__main__':
        loop = asyncio
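Two things about this snippet tend to trip people up: aiohttp.Timeout was removed in later aiohttp releases, and asyncio.wait() returns (done, pending) task sets rather than the response bodies. A sketch of the same fetch against the current aiohttp API, using asyncio.gather() so the texts come back directly and in order:

    import asyncio
    import aiohttp

    async def fetch(session, url):
        async with session.get(url) as response:
            return await response.text()

    async def fetch_all(urls):
        # ClientTimeout on the session replaces the old aiohttp.Timeout(10).
        timeout = aiohttp.ClientTimeout(total=10)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            return await asyncio.gather(*(fetch(session, url) for url in urls))

    if __name__ == '__main__':
        pages = asyncio.run(fetch_all(['https://httpbin.org/get']))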

Can I use asyncio.wait_for() as a context manager?

Submitted by 爷,独闯天下 on 2020-01-14 14:21:47
Question: Why wouldn't this work:

    try:
        async with asyncio.wait_for(aiohttp.get(url), 2) as resp:
            print(resp.text())
    except asyncio.TimeoutError as e:
        pass

It gives:

    async with asyncio.wait_for(aiohttp.get(url), 2) as resp:
    AttributeError: __aexit__

To my understanding, asyncio.wait_for() would pass through the future of aiohttp.get(), which has an __aenter__ and an __aexit__ method (as is demonstrated by the fact that async with aiohttp.get() works).

Answer 1: You cannot write async with wait_for(...) -- wait_for doesn't
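The fix that is usually given (a sketch; note that the module-level aiohttp.get() from the question is gone from current aiohttp, so this uses a ClientSession): wait_for() returns an awaitable, not an async context manager, so you await it around the whole operation instead of entering it with async with.

    import asyncio
    import aiohttp

    async def fetch(session, url):
        async with session.get(url) as resp:
            return await resp.text()

    async def main(url):
        async with aiohttp.ClientSession() as session:
            try:
                # The timeout wraps the entire fetch coroutine; wait_for()
                # itself is awaited rather than used as a context manager.
                text = await asyncio.wait_for(fetch(session, url), 2)
                print(text)
            except asyncio.TimeoutError:
                pass

    asyncio.run(main('https://httpbin.org/get'))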

asynchronous slower than synchronous

Submitted by 醉酒当歌 on 2020-01-14 03:38:11
Question: My program does the following:

- take a folder of txt files
- for each file: read the file, make a POST request to an API on localhost using the file content, and parse the XML response (not in the example below)

I was concerned about the performance of the synchronous version of the program, so I tried to use aiohttp to make it asynchronous (it's my first attempt at async programming in Python besides Scrapy). It turned out that the async code took twice as long, and I don't understand why.

SYNCHRONOUS CODE (152 seconds):

    url =
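The excerpt cuts off before the code, but the two classic reasons async ends up slower are awaiting each request in sequence and opening a new ClientSession per request. For contrast, a sketch of the concurrent shape such a program usually takes (the localhost URL is a placeholder, and the XML parsing is elided as in the question):

    import asyncio
    from pathlib import Path
    import aiohttp

    async def post_file(session, path):
        async with session.post("http://localhost:8000/api",
                                data=path.read_text()) as resp:
            return await resp.text()  # XML parsing would go here

    async def main(folder):
        # One shared session keeps the connection pool alive; gather() runs
        # the POSTs concurrently instead of one after another.
        async with aiohttp.ClientSession() as session:
            tasks = [post_file(session, p) for p in Path(folder).glob("*.txt")]
            return await asyncio.gather(*tasks)

    responses = asyncio.run(main("texts"))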

How to paginate through an API response asynchronously with asyncio and aiohttp

Submitted by 孤街浪徒 on 2020-01-14 03:15:28
Question: I'm trying to make API calls with Python asynchronously. I have multiple endpoints in a list, and each endpoint returns paginated results. I'm able to set up going through the multiple endpoints asynchronously, but I am not able to return the paginated results of each endpoint. From debugging, I found that the fetch_more() function runs the while loop but doesn't actually get past async with session.get(). So, basically, the function fetch_more() is intended to get the remaining results
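One common shape for async pagination (a sketch with hypothetical endpoint and field names, since the asker's fetch_more() is not shown): fetch page 1 first, read the page count out of it, then gather the remaining pages concurrently.

    import asyncio
    import aiohttp

    async def fetch_page(session, endpoint, page):
        async with session.get(endpoint, params={"page": page}) as resp:
            return await resp.json()

    async def fetch_all_pages(session, endpoint):
        first = await fetch_page(session, endpoint, 1)
        total_pages = first["total_pages"]  # assumed response field
        rest = await asyncio.gather(*(fetch_page(session, endpoint, p)
                                      for p in range(2, total_pages + 1)))
        return [first, *rest]

    async def main(endpoints):
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(fetch_all_pages(session, e)
                                          for e in endpoints))

    results = asyncio.run(main(["https://api.example.com/items"]))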

python3.7 aiohttp is slow

Submitted by 陌路散爱 on 2020-01-13 06:26:14
Question:

    import asyncio
    import aiohttp
    import socket

    def _create_loop():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop = asyncio.get_event_loop()
        return loop

    async def _create_tasks(loop, URLs, func):
        connector = aiohttp.TCPConnector(limit=200, limit_per_host=200,
                                         force_close=True,
                                         enable_cleanup_closed=True,
                                         family=socket.AF_INET,
                                         verify_ssl=False)
        async with aiohttp.ClientSession(loop=loop, connector=connector) as session:
            semaphore = asyncio.Semaphore(200)
            async with semaphore:
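Judging from what is visible, one likely culprit is that async with semaphore: wraps the block where the tasks are created, which funnels everything through a single acquisition. The usual pattern (a sketch) acquires the semaphore inside each fetch instead, so up to 200 requests genuinely run in parallel:

    import asyncio
    import aiohttp

    async def fetch(session, semaphore, url):
        # Each coroutine holds a semaphore slot only for its own request.
        async with semaphore:
            async with session.get(url) as resp:
                return await resp.read()

    async def crawl(urls):
        semaphore = asyncio.Semaphore(200)
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(fetch(session, semaphore, u)
                                          for u in urls))

    results = asyncio.run(crawl(["https://httpbin.org/get"]))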

uvloop http

Submitted by 大兔子大兔子 on 2020-01-07 14:42:27
    import time
    from statistics import mean, stdev
    import asyncio
    import uvloop
    import aiohttp

    urls = [
        'https://aws.amazon.com',
        'https://google.com',
        'https://microsoft.com',
        'https://www.oracle.com/index.html',  # comma added: adjacent string literals would silently concatenate
        'https://www.python.org',
        'https://nodejs.org',
        'https://angular.io',
        'https://www.djangoproject.com',
        'https://reactjs.org',
        'https://www.mongodb.com',
        'https://reinvent.awsevents.com',
        'https://kafka.apache.org',
        'https://github.com',
        'https://slack.com',
        'https://authy.com',
        'https://cnn.com',
        'https://fox.com',
        'https://nbc.com',
        'https://www
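The excerpt ends before the benchmark body, but the setup it implies is straightforward (a sketch; the fetch and timing logic here is assumed): install uvloop's event loop policy before any loop is created, then run the aiohttp requests as usual.

    import asyncio
    import time
    import aiohttp
    import uvloop

    # Swap in uvloop's event loop implementation for all subsequent loops.
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

    async def fetch(session, url):
        async with session.get(url) as resp:
            await resp.read()

    async def timed_run(urls):
        async with aiohttp.ClientSession() as session:
            start = time.perf_counter()
            await asyncio.gather(*(fetch(session, u) for u in urls),
                                 return_exceptions=True)
            return time.perf_counter() - start

    print(asyncio.run(timed_run(['https://www.python.org'])))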