aiohttp

Does aiohttp have an ORM?

£可爱£侵袭症+ submitted on 2019-12-05 03:33:48
The relatively new Python 3 aiohttp library contains both an HTTP client and server. Does it contain an ORM? If not, is it possible to use it with a third-party ORM? And if that is not possible, what is it meant for? I don't mean that an app cannot be written without an ORM, but the vast majority of Python frameworks support one, and developers are used to that style of programming.

Short answer: aiohttp has no ORM yet. You can use SQLAlchemy-like queries with the aiopg driver (see example); the same is available for aiomysql. The support is not full-fledged object-relational mapping, only helpers for building SQL queries.
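For context, a minimal sketch of what those aiopg.sa helpers look like, assuming a PostgreSQL database; the table definition and connection parameters below are placeholders, not part of the original question:

import asyncio
import sqlalchemy as sa
from aiopg.sa import create_engine

metadata = sa.MetaData()
users = sa.Table('users', metadata,
                 sa.Column('id', sa.Integer, primary_key=True),
                 sa.Column('name', sa.String(255)))

async def main():
    # Connection parameters are placeholders; adjust for your database.
    engine = await create_engine(user='postgres', database='mydb',
                                 host='127.0.0.1', password='secret')
    async with engine.acquire() as conn:
        # SQLAlchemy Core expressions executed asynchronously -- query helpers,
        # not a full ORM with mapped classes and sessions.
        await conn.execute(users.insert().values(name='alice'))
        async for row in conn.execute(users.select()):
            print(row.id, row.name)
    engine.close()
    await engine.wait_closed()

asyncio.get_event_loop().run_until_complete(main())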

Python asyncio/aiohttp: ValueError: too many file descriptors in select() on Windows

穿精又带淫゛_ submitted on 2019-12-04 11:23:01
Hello everyone, I'm having trouble trying to understand asyncio and aiohttp and make the two work together properly. Not only do I not fully understand what I'm doing, at this point I've run into a problem that I have no idea how to solve. I'm using Windows 10 64-bit, latest update. The following code returns a list of pages that do not contain "html" in the Content-Type header, using asyncio:

import asyncio
import aiohttp

MAXitems = 30

async def getHeaders(url, session, sema):
    async with session:
        async with sema:
            try:
                async with session.head(url) as response:
                    try:
                        if "html" in
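The error itself comes from the selector-based event loop that asyncio uses by default on Windows, which can only wait on roughly 512 sockets in select(). Two commonly suggested workarounds, sketched here rather than taken from the original post, are switching to the proactor event loop or capping how many connections are open at once:

import asyncio
import sys

import aiohttp

# Workaround 1: on Windows, use the proactor event loop (IOCP-based), which is
# not limited by select()'s file-descriptor cap.
if sys.platform == "win32":
    asyncio.set_event_loop(asyncio.ProactorEventLoop())

async def main(urls):
    # Workaround 2: bound the number of simultaneous connections so the
    # selector never has to watch too many sockets at once.
    connector = aiohttp.TCPConnector(limit=50)
    async with aiohttp.ClientSession(connector=connector) as session:
        ...  # issue the HEAD requests here, as in the question's getHeaders()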

How to mock aiohttp.client.ClientSession.get async context manager

余生颓废 submitted on 2019-12-04 11:00:34
I am having some trouble mocking the aiohttp.client.ClientSession.get async context manager. I found some articles, and here is one example that seemed to work: article 1. The code I want to test:

async_app.py

import random
from aiohttp.client import ClientSession

async def get_random_photo_url():
    while True:
        async with ClientSession() as session:
            async with session.get('random.photos') as resp:
                json = await resp.json()
                photos = json['photos']
                if not photos:
                    continue
                return random.choice
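One way to approach this is sketched below, assuming Python 3.8+ (where MagicMock gained async context manager support and unittest gained IsolatedAsyncioTestCase); the fake photo payload and the assertions are illustrative, not taken from the article the question links to:

import unittest
from unittest import mock

import async_app  # the module from the question


class GetRandomPhotoUrlTest(unittest.IsolatedAsyncioTestCase):
    @mock.patch("async_app.ClientSession.get")
    async def test_returns_a_photo(self, mock_get):
        # session.get(...) is used as "async with ... as resp", so the object
        # to configure is whatever __aenter__ returns.
        resp = mock_get.return_value.__aenter__.return_value
        resp.json = mock.AsyncMock(
            return_value={"photos": [{"url": "http://example.test/1.jpg"}]})

        result = await async_app.get_random_photo_url()

        # The excerpt truncates the return statement, so only check that the
        # mocked response was actually consumed and something came back.
        resp.json.assert_awaited()
        self.assertIsNotNone(result)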

Maximize number of parallel requests (aiohttp)

主宰稳场 submitted on 2019-12-03 22:16:14
tl;dr: how do I maximize the number of HTTP requests I can send in parallel? I am fetching data from multiple URLs with the aiohttp library. I'm testing its performance and I've observed that somewhere in the process there is a bottleneck, where running more URLs at once just doesn't help. I am using this code:

import asyncio
import aiohttp

async def fetch(url, session):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0'}
    try:
        async with session
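One knob worth checking when throughput plateaus, shown here as a sketch rather than as the answer the thread settled on: aiohttp's TCPConnector pools connections and by default allows at most 100 simultaneous connections in total. The figures below are illustrative, and fetch() is the function from the question:

import asyncio
import aiohttp

async def main(urls):
    # limit: total simultaneous connections (default 100).
    # limit_per_host: simultaneous connections to one host (default unlimited).
    connector = aiohttp.TCPConnector(limit=500, limit_per_host=100)
    async with aiohttp.ClientSession(connector=connector) as session:
        return await asyncio.gather(*(fetch(url, session) for url in urls))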

Is that benchmark reliable - aiohttp vs requests

可紊 submitted on 2019-12-03 20:36:12
We are trying to choose between technologies at my work, and I thought I'd run a benchmark using both libraries (aiohttp and requests). I want it to be as fair and unbiased as possible, and would love the community to take a look at this. So this is my current code:

import asyncio as aio
import aiohttp
import requests
import time

TEST_URL = "https://a-domain-i-can-use.tld"

def requests_fetch_url(url):
    with requests.Session() as session:
        with session.get(url) as resp:
            html = resp.text

async def aio_fetch_url(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
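For reference, one way such a comparison is commonly made more even, sketched under assumptions not stated in the question: both clients reuse a single session, and aiohttp is measured with the N requests running concurrently, since concurrency is its main selling point.

import asyncio
import time

import aiohttp
import requests

N = 50
TEST_URL = "https://a-domain-i-can-use.tld"  # placeholder domain from the question

def bench_requests():
    start = time.perf_counter()
    with requests.Session() as session:          # one session, one connection pool
        for _ in range(N):
            session.get(TEST_URL).text
    return time.perf_counter() - start

async def bench_aiohttp():
    start = time.perf_counter()
    async with aiohttp.ClientSession() as session:  # one session here as well
        async def one():
            async with session.get(TEST_URL) as resp:
                await resp.text()
        await asyncio.gather(*(one() for _ in range(N)))
    return time.perf_counter() - start

print("requests:", bench_requests())
print("aiohttp :", asyncio.get_event_loop().run_until_complete(bench_aiohttp()))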

Making 1 million requests with aiohttp/asyncio - literally

好久不见. submitted on 2019-12-03 16:33:37
I followed this tutorial: https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html and everything works fine when I am doing around 50,000 requests. But I need to make 1 million API calls, and then I have a problem with this code:

url = "http://some_url.com/?id={}"
tasks = set()
sem = asyncio.Semaphore(MAX_SIM_CONNS)
for i in range(1, LAST_ID + 1):
    task = asyncio.ensure_future(bound_fetch(sem, url.format(i)))
    tasks.add(task)
responses = asyncio.gather(*tasks)
return await responses

Because Python needs to create 1 million tasks, it basically just lags and then prints Killed.
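A common way around this, sketched below and not taken from the tutorial: keep a fixed pool of worker tasks pulling ids from a bounded queue, so memory stays flat no matter how many requests are made. MAX_SIM_CONNS, LAST_ID and the URL are the placeholders from the question.

import asyncio
import aiohttp

MAX_SIM_CONNS = 50
LAST_ID = 1_000_000
URL = "http://some_url.com/?id={}"

async def worker(queue, session, results):
    while True:
        i = await queue.get()
        try:
            async with session.get(URL.format(i)) as resp:
                results[i] = resp.status
        finally:
            queue.task_done()

async def main():
    results = {}
    queue = asyncio.Queue(maxsize=1000)          # back-pressure on the producer
    async with aiohttp.ClientSession() as session:
        workers = [asyncio.ensure_future(worker(queue, session, results))
                   for _ in range(MAX_SIM_CONNS)]
        for i in range(1, LAST_ID + 1):
            await queue.put(i)                   # blocks while the queue is full
        await queue.join()                       # wait until every id is processed
        for w in workers:
            w.cancel()
    return results

asyncio.get_event_loop().run_until_complete(main())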

Is there any way to use aiohttp client with socks proxy?

末鹿安然 submitted on 2019-12-03 14:55:20
It looks like aiohttp.ProxyConnector doesn't support SOCKS proxies. Is there any workaround for this? I would be grateful for any advice.

Have you tried aiosocks?

import asyncio
import aiohttp
import aiosocks
from aiosocks.connector import SocksConnector

conn = SocksConnector(proxy=aiosocks.Socks5Addr(PROXY_ADDRESS, PROXY_PORT),
                      proxy_auth=None, remote_resolve=True)
session = aiohttp.ClientSession(connector=conn)
async with session.get('http://python.org') as resp:
    assert resp.status == 200

aiosocks does not work with the newer 3.x versions of aiohttp. You can use aiosocksy to implement a SOCKS proxy. To check
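As a further option not mentioned in the excerpt, the third-party aiohttp_socks package also works with aiohttp 3.x; a minimal sketch with a placeholder proxy address:

import asyncio
import aiohttp
from aiohttp_socks import ProxyConnector

async def main():
    # The proxy URL below is a placeholder; credentials can be embedded as
    # socks5://user:pass@host:port if the proxy requires authentication.
    connector = ProxyConnector.from_url('socks5://127.0.0.1:1080')
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get('http://python.org') as resp:
            print(resp.status)

asyncio.get_event_loop().run_until_complete(main())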

RuntimeError: There is no current event loop in thread in async + apscheduler

戏子无情 submitted on 2019-12-03 03:42:15
I have an async function and need to run it with apscheduler every N minutes. Here is the Python code:

URL_LIST = ['<url1>',
            '<url2>',
            '<url2>',
            ]

def demo_async(urls):
    """Fetch list of web pages asynchronously."""
    loop = asyncio.get_event_loop()  # event loop
    future = asyncio.ensure_future(fetch_all(urls))  # tasks to do
    loop.run_until_complete(future)  # loop until done

async def fetch_all(urls):
    tasks = []  # dictionary of start times for each url
    async with ClientSession() as session:
        for
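The error typically appears because APScheduler runs the job in a worker thread, and asyncio.get_event_loop() refuses to create a loop outside the main thread. Two commonly suggested fixes, sketched here rather than quoted from the thread (the interval is a placeholder):

import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler

# fetch_all and URL_LIST refer to the definitions in the question above.

# Fix 1: create and install a fresh event loop inside whatever thread
# APScheduler uses to run the job.
def demo_async(urls):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)      # the worker thread has no loop of its own
    try:
        loop.run_until_complete(fetch_all(urls))
    finally:
        loop.close()

# Fix 2: let APScheduler drive the coroutine directly on the asyncio loop.
scheduler = AsyncIOScheduler()
scheduler.add_job(fetch_all, 'interval', minutes=5, args=[URL_LIST])
scheduler.start()
asyncio.get_event_loop().run_forever()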

“async with” in Python 3.4

不打扰是莪最后的温柔 submitted on 2019-12-03 02:30:11
The Getting Started docs for aiohttp give the following client example:

import asyncio
import aiohttp

async def fetch_page(session, url):
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            assert response.status == 200
            return await response.read()

loop = asyncio.get_event_loop()
with aiohttp.ClientSession(loop=loop) as session:
    content = loop.run_until_complete(
        fetch_page(session, 'http://python.org'))
print(content)

And they give the following note for Python 3.4 users: if you are using Python 3.4, please replace await with yield from and async def with a @coroutine decorator.
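For illustration, a sketch of the pre-3.5 spelling the note alludes to. Python 3.4 has no async with at all, so the response must be awaited and released by hand; the explicit release() call follows older aiohttp examples contemporary with Python 3.4 and is an assumption here, not part of the documentation quoted above.

import asyncio
import aiohttp

@asyncio.coroutine
def fetch_page(session, url):
    with aiohttp.Timeout(10):
        # "yield from" plays the role of "await"; there is no async context
        # manager, so the response is released explicitly in finally.
        response = yield from session.get(url)
        try:
            assert response.status == 200
            return (yield from response.read())
        finally:
            yield from response.release()

loop = asyncio.get_event_loop()
with aiohttp.ClientSession(loop=loop) as session:
    content = loop.run_until_complete(
        fetch_page(session, 'http://python.org'))
print(content)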