aiohttp

ssl/asyncio: traceback even when error is handled

守給你的承諾、 submitted on 2019-11-30 08:54:24
Trying to download and process JPEGs from URLs. My issue isn't that certificate verification fails for some URLs, as these URLs are old and may no longer be trustworthy, but that when I try...except the SSLCertVerificationError, I still get the traceback. System: Linux 4.17.14-arch1-1-ARCH, python 3.7.0-3, aiohttp 3.3.2. Minimal example:

import asyncio
import aiohttp
from ssl import SSLCertVerificationError

async def fetch_url(url, client):
    try:
        async with client.get(url) as resp:
            print(resp.status)
            print(await resp.read())
    except SSLCertVerificationError as e:
        print('Error handled')

async
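
A minimal sketch of one direction to try, not a confirmed diagnosis of this exact report: catch aiohttp's own certificate error alongside ssl's, since the connector may wrap the SSL failure before it reaches user code, and be aware that a lingering traceback can also come from asyncio logging the transport-level SSL error itself. The test URL is just an example of a site with a bad certificate.

import asyncio
import ssl
import aiohttp

async def fetch_url(url, client):
    try:
        async with client.get(url) as resp:
            print(resp.status)
            print(await resp.read())
    except (aiohttp.ClientConnectorCertificateError, ssl.SSLCertVerificationError):
        # handles both the wrapped connector error and the raw ssl error
        print('Error handled')

async def main():
    async with aiohttp.ClientSession() as client:
        await fetch_url('https://expired.badssl.com/', client)  # example URL, bad certificate

asyncio.run(main())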

asyncio aiohttp progress bar with tqdm

不羁的心 submitted on 2019-11-30 03:42:56
I'm attempting to integrate a tqdm progress bar to monitor POST requests generated with aiohttp in Python 3.5. I have a working progress bar but can't seem to gather results using as_completed(). Pointers gratefully received. Examples I've found suggest the following pattern, which is incompatible with Python 3.5 async def definitions:

for f in tqdm.tqdm(asyncio.as_completed(tasks), total=len(coros)):
    yield from f

Working (albeit redacted) async code without the progress bar:

def async_classify(records):
    async def fetch(session, name, sequence):
        url = 'https://app.example.com/api/v0
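
A rough sketch of the same as_completed() pattern inside an async def, with await taking the place of yield from. The fetch() body, the GET request, and the URLs stand in for the redacted POST code and are only illustrative.

import asyncio
import aiohttp
import tqdm

async def fetch(session, url):
    # placeholder for the redacted POST call in the question
    async with session.get(url) as resp:
        return await resp.text()

async def run_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [asyncio.ensure_future(fetch(session, url)) for url in urls]
        results = []
        # tqdm wraps the iterator returned by as_completed();
        # inside an async def, "await f" replaces "yield from f"
        for f in tqdm.tqdm(asyncio.as_completed(tasks), total=len(tasks)):
            results.append(await f)  # results arrive in completion order
        return results

if __name__ == '__main__':
    urls = ['http://python.org'] * 5
    loop = asyncio.get_event_loop()
    print(len(loop.run_until_complete(run_all(urls))))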

How to use an aiohttp ClientSession with Sanic?

梦想与她 submitted on 2019-11-29 23:48:52
I am trying to understand the right way to use aiohttp with Sanic. From the aiohttp documentation, I find the following: Don't create a session per request. Most likely you need a session per application which performs all requests altogether. More complex cases may require a session per site, e.g. one for Github and another one for Facebook APIs. Anyway, making a session for every request is a very bad idea. A session contains a connection pool inside. Connection reuse and keep-alive
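
One sketch of the "session per application" advice in a Sanic app: create the ClientSession inside a listener that runs on Sanic's own event loop and close it on shutdown. The attribute name, route, and target URL are made up for illustration, and the listener/attribute details vary between Sanic versions (newer releases keep such state on app.ctx).

import aiohttp
from sanic import Sanic, response

app = Sanic(__name__)

@app.listener('before_server_start')
async def init_session(app, loop):
    # one shared session for the whole application
    app.aiohttp_session = aiohttp.ClientSession(loop=loop)

@app.listener('after_server_stop')
async def close_session(app, loop):
    await app.aiohttp_session.close()

@app.route('/fetch')
async def fetch_handler(request):
    session = request.app.aiohttp_session
    async with session.get('http://python.org') as resp:  # example target
        return response.text(await resp.text())

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)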

Asyncio RuntimeError: Event Loop is Closed

陌路散爱 submitted on 2019-11-29 09:20:41
I'm trying to make a bunch of requests (~1000) using asyncio and the aiohttp library, but I am running into a problem that I can't find much info on. When I run this code with 10 URLs, it runs just fine. When I run it with 100+ URLs, it breaks and gives me a RuntimeError: Event loop is closed error.

import asyncio
import aiohttp

@asyncio.coroutine
def get_status(url):
    code = '000'
    try:
        res = yield from asyncio.wait_for(aiohttp.request('GET', url), 4)
        code = res.status
        res.close()
    except Exception as e:
        print(e)
    print(code)

if __name__ == "__main__":
    urls = ['https://google.com/'] * 100
    coros =
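
A sketch of the fix that is usually suggested for this kind of failure, not a confirmed diagnosis of this exact error: share one ClientSession instead of calling aiohttp.request() per URL, and cap concurrency with a semaphore so ~1000 requests don't all open sockets at once. It uses the newer async/await style and aiohttp 3.x timeouts rather than the question's @asyncio.coroutine code; the concurrency limit of 100 is an example value.

import asyncio
import aiohttp

async def get_status(session, sem, url):
    code = '000'
    try:
        async with sem:  # at most 100 requests in flight at once
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=4)) as res:
                code = res.status
    except Exception as e:
        print(e)
    print(code)

async def main(urls):
    sem = asyncio.Semaphore(100)
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(get_status(session, sem, url) for url in urls))

if __name__ == "__main__":
    urls = ['https://google.com/'] * 100
    asyncio.get_event_loop().run_until_complete(main(urls))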

Why doesn't asyncio always use executors?

ⅰ亾dé卋堺 submitted on 2019-11-29 08:04:36
I have to send a lot of HTTP requests; once all of them have returned, the program can continue. Sounds like a perfect match for asyncio. A bit naively, I wrapped my calls to requests in an async function and gave them to asyncio. This doesn't work. After searching online, I found two solutions:

- use a library like aiohttp, which is made to work with asyncio
- wrap the blocking code in a call to run_in_executor

To understand this better, I wrote a small benchmark. The server side is a Flask program that waits 0.1 seconds before answering a request.

from flask import Flask
import time

app =
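
A minimal sketch of the run_in_executor half of the comparison (the URL and request count are placeholders standing in for the question's benchmark): each blocking requests.get() is pushed onto the default thread-pool executor, so the event loop itself never blocks while the requests run in parallel.

import asyncio
import requests

async def fetch_in_executor(loop, url):
    # run the blocking requests.get() on the default thread pool
    return await loop.run_in_executor(None, requests.get, url)

async def main(loop, urls):
    responses = await asyncio.gather(*(fetch_in_executor(loop, u) for u in urls))
    print([r.status_code for r in responses])

if __name__ == '__main__':
    urls = ['http://127.0.0.1:5000/'] * 20  # assumes the question's local Flask server
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop, urls))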

aiohttp: rate limiting parallel requests

喜欢而已 submitted on 2019-11-29 04:17:04
APIs often have rate limits that users have to follow. As an example, let's take 50 requests/second. Sequential requests take 0.5-1 second each and thus are too slow to come close to that limit. Parallel requests with aiohttp, however, exceed the rate limit. To poll the API as fast as allowed, one needs to rate-limit the parallel calls. Examples that I found so far decorate session.get, approximately like so:

session.get = rate_limited(max_calls_per_second)(session.get)

This works well for sequential calls. Trying to implement this in parallel calls does not work as intended. Here's some code as
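
A rough sketch of one way to rate-limit parallel calls without decorating session.get: a semaphore holds one permit per allowed request, and each permit is handed back one second after it was taken, so at most RATE requests start in any one-second window. The RATE value and URLs are placeholders, not from the question.

import asyncio
import aiohttp

RATE = 50  # example: 50 requests per second

async def fetch(session, sem, url):
    await sem.acquire()
    # release the permit one second after the request *started*,
    # independent of how long the request itself takes
    asyncio.get_event_loop().call_later(1.0, sem.release)
    async with session.get(url) as resp:
        return await resp.text()

async def main(urls):
    sem = asyncio.Semaphore(RATE)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, sem, url) for url in urls))

if __name__ == '__main__':
    urls = ['http://python.org'] * 200  # placeholder URLs
    results = asyncio.get_event_loop().run_until_complete(main(urls))
    print(len(results))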

Fetching multiple urls with aiohttp in Python 3.5

▼魔方 西西 submitted on 2019-11-28 21:37:41
Since Python 3.5 introduced async with, the syntax recommended in the docs for aiohttp has changed. Now, to get a single URL, they suggest:

import aiohttp
import asyncio

async def fetch(session, url):
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            return await response.text()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession(loop=loop) as session:
        html = loop.run_until_complete(
            fetch(session, 'http://python.org'))
        print(html)

How can I modify this to fetch a collection of URLs instead of just one? In the old asyncio examples you would
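
One sketch of extending the single-URL example to a list, keeping the older-aiohttp style the question uses (aiohttp.Timeout, plain "with" around ClientSession): asyncio.gather() returns the pages in the same order as the input URLs. The URL list is just an example.

import aiohttp
import asyncio

async def fetch(session, url):
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def fetch_all(session, urls):
    # gather() preserves input order and raises on the first failure
    return await asyncio.gather(*(fetch(session, url) for url in urls))

if __name__ == '__main__':
    urls = ['http://python.org', 'http://httpbin.org/get']  # example URLs
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession(loop=loop) as session:
        pages = loop.run_until_complete(fetch_all(session, urls))
        print([len(page) for page in pages])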

asyncio web scraping 101: fetching multiple urls with aiohttp

混江龙づ霸主 submitted on 2019-11-28 05:28:50
In an earlier question, one of the authors of aiohttp kindly suggested a way to fetch multiple URLs with aiohttp using the new async with syntax from Python 3.5:

import aiohttp
import asyncio

async def fetch(session, url):
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def fetch_all(session, urls, loop):
    results = await asyncio.wait(
        [loop.create_task(fetch(session, url)) for url in urls])
    return results

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    # breaks because of the first url
    urls = ['http://SDFKHSKHGKLHSKLJHGSDFKSJH.com', 'http
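
A sketch of one way to keep a single bad hostname from breaking the whole batch: pass return_exceptions=True to gather(), so each failure comes back as an exception object in the results instead of propagating out of the await. The fetch() body mirrors the question's older-aiohttp style; the URL list is only an example.

import aiohttp
import asyncio

async def fetch(session, url):
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def fetch_all(session, urls):
    # failed fetches come back as exception instances instead of raising
    return await asyncio.gather(
        *(fetch(session, url) for url in urls),
        return_exceptions=True)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    urls = ['http://SDFKHSKHGKLHSKLJHGSDFKSJH.com', 'http://python.org']  # example list
    with aiohttp.ClientSession(loop=loop) as session:
        results = loop.run_until_complete(fetch_all(session, urls))
        for url, result in zip(urls, results):
            print(url, '->', type(result).__name__)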

Asyncio RuntimeError: Event Loop is Closed

痞子三分冷 submitted on 2019-11-28 02:48:30
I'm trying to make a bunch of requests (~1000) using asyncio and the aiohttp library, but I am running into a problem that I can't find much info on. When I run this code with 10 URLs, it runs just fine. When I run it with 100+ URLs, it breaks and gives me a RuntimeError: Event loop is closed error.

import asyncio
import aiohttp

@asyncio.coroutine
def get_status(url):
    code = '000'
    try:
        res = yield from asyncio.wait_for(aiohttp.request('GET', url), 4)
        code = res.status
        res.close()
    except

Why doesn't asyncio always use executors?

最后都变了- submitted on 2019-11-28 01:52:19
I have to send a lot of HTTP requests; once all of them have returned, the program can continue. Sounds like a perfect match for asyncio. A bit naively, I wrapped my calls to requests in an async function and gave them to asyncio. This doesn't work. After searching online, I found two solutions:

- use a library like aiohttp, which is made to work with asyncio
- wrap the blocking code in a call to run_in_executor

To understand this better, I wrote a small benchmark. The server-side is a flask