python-asyncio

Sequential version of asyncio.gather

Submitted by 99封情书 on 2020-05-17 06:22:08
Question: I tried to create a method similar to asyncio.gather, but one that executes the list of tasks sequentially rather than concurrently:

```python
async def in_sequence(*tasks):
    """Executes tasks in sequence"""
    for task in tasks:
        await task
```

This method was then supposed to be used like this:

```python
async def some_work(work_name):
    """Do some work"""
    print(f"Start {work_name}")
    await asyncio.sleep(1)
    if raise_exception:
        raise RuntimeError(f"{work_name} raised an exception")
    print(f"Finish {work_name}")

async def main …
```
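A minimal sketch of how such a sequential runner can look end-to-end (the body of `main` below is an assumption, since the excerpt is cut off, and the `raise_exception` flag is omitted for brevity):

```python
import asyncio

async def in_sequence(*tasks):
    """Await each awaitable one after another, collecting results in order."""
    return [await task for task in tasks]

async def some_work(work_name):
    print(f"Start {work_name}")
    await asyncio.sleep(1)
    print(f"Finish {work_name}")

async def main():
    # Takes ~2s total: "B" does not start until "A" has finished.
    await in_sequence(some_work("A"), some_work("B"))

asyncio.run(main())
```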

How to await a coroutine with an event loop

Submitted by 喜夏-厌秋 on 2020-05-14 03:28:39
Question: I have a coroutine that runs an event loop. How should I await that coroutine? I am using Python 3.7.3.

```python
# The coroutine
async def main():
    # This code is written using apscheduler.schedulers.asyncio.AsyncIOScheduler,
    # which has its own *event loop* to run scheduled tasks.
    #
    # However, this code will await another coroutine, e.g.,
    # redis = await aioredis.create_redis('/tmp/redis.sock')
    try:
        asyncio.get_event_loop().run_forever()
    except:
        pass

if __name__ == '__main__':
    asyncio.run(main())
```

The code …
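One way to untangle this (a sketch, assuming the goal is simply to keep `main()` alive while scheduled jobs run): `asyncio.run()` already starts and owns the event loop, so calling `run_forever()` inside the coroutine raises `RuntimeError: This event loop is already running`. Keep the coroutine itself pending instead:

```python
import asyncio

async def main():
    # ... start AsyncIOScheduler / await aioredis setup here ...
    # Awaiting an Event that is never set keeps main() (and everything
    # scheduled on the already-running loop) alive, without a second
    # call to run_forever().
    await asyncio.Event().wait()

if __name__ == '__main__':
    asyncio.run(main())
```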

aiohttp: rate limiting requests-per-second by domain

Submitted by Deadly on 2020-05-13 06:04:38
Question: I am writing a web crawler that runs parallel fetches for many different domains. I want to limit the number of requests per second made to each individual domain, but I do not care about the total number of connections that are open, or the total requests per second made across all domains. In other words, I want to maximize the number of open connections and requests per second overall, while limiting the requests per second made to any single domain. All of the …
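One possible shape for such a limiter (an illustrative sketch, not taken from the question; the 1 request/second/domain interval and the helper names are assumptions): record the next allowed send time per domain and make each fetch sleep until its slot comes up, so only same-domain requests wait on each other.

```python
import asyncio
from collections import defaultdict
from urllib.parse import urlsplit

import aiohttp

PER_DOMAIN_INTERVAL = 1.0  # min seconds between requests to one domain (assumed)

_next_slot = defaultdict(float)      # domain -> earliest allowed send time
_locks = defaultdict(asyncio.Lock)   # domain -> lock guarding _next_slot

async def throttled_get(session, url):
    domain = urlsplit(url).netloc
    async with _locks[domain]:
        now = asyncio.get_running_loop().time()
        delay = max(0.0, _next_slot[domain] - now)
        _next_slot[domain] = max(now, _next_slot[domain]) + PER_DOMAIN_INTERVAL
    if delay:
        await asyncio.sleep(delay)   # wait for our slot; other domains proceed
    async with session.get(url) as resp:
        return await resp.text()

async def crawl(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(throttled_get(session, u) for u in urls))
```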

asyncio event loop equivalent with uvloop

Submitted by 好久不见. on 2020-04-30 10:22:38
Question: I have an event loop with a coroutine method using asyncio. I am looking for an equivalent of the following example that uses uvloop instead. Here's a simple asyncio event loop example:

```python
import asyncio

async def read(**kwargs):
    oid = kwargs.get('oid', '0.0.0.0.0.0')
    time = kwargs.get('time', 1)
    try:
        print('start: ' + oid)
    except Exception as exc:
        print(exc)
    finally:
        await asyncio.sleep(time)
        print('terminate: ' + oid)

def event_loop(configs):
    loop = asyncio.get_event_loop()
    for conf in …
```
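For reference, uvloop is a drop-in replacement for the default event loop, swapped in through the event-loop policy; the coroutines themselves do not change. A minimal sketch (the `oid` value is illustrative):

```python
import asyncio
import uvloop

async def read(oid='0.0.0.0.0.0', time=1):
    print('start: ' + oid)
    await asyncio.sleep(time)
    print('terminate: ' + oid)

# Install uvloop's loop implementation before any loop is created;
# equivalent to asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()).
uvloop.install()
asyncio.run(read(oid='1.2.3.4', time=1))
```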

Aiohttp logging: how to distinguish log messages of different requests?

Submitted by …衆ロ難τιáo~ on 2020-04-29 18:55:25
Question: Imagine I have this web application based on Aiohttp:

```python
from aiohttp import web
import asyncio
import logging

logger = logging.getLogger(__name__)

async def hello(request):
    logger.info('Started processing request')
    await asyncio.sleep(1)
    logger.info('Doing something')
    await asyncio.sleep(1)
    return web.Response(text="Hello, world!\n")

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(name)-14s %(levelname)s: %(message)s')

app = web.Application()
app.add_routes([web.get('/', hello)] …
```
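A common way to tell interleaved log lines apart (a sketch, not the accepted answer; the middleware name and the 8-character id format are assumptions) is to stamp a per-request id into a `contextvars.ContextVar` and inject it into every record via a logging filter. Each aiohttp request is handled in its own asyncio task, so the variable is isolated between concurrent requests:

```python
import contextvars
import logging
import uuid

from aiohttp import web

request_id = contextvars.ContextVar('request_id', default='-')

class RequestIdFilter(logging.Filter):
    """Copy the current request id onto every log record."""
    def filter(self, record):
        record.request_id = request_id.get()
        return True

@web.middleware
async def assign_request_id(request, handler):
    # A fresh id per request; invisible to other concurrently running tasks.
    request_id.set(uuid.uuid4().hex[:8])
    return await handler(request)

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s [%(request_id)s] %(name)-14s %(levelname)s: %(message)s')
for h in logging.getLogger().handlers:
    h.addFilter(RequestIdFilter())

app = web.Application(middlewares=[assign_request_id])
```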

How to properly terminate nested asyncio-tasks

Submitted by 倖福魔咒の on 2020-04-18 05:48:04
Question: I asked here how I can make a version of asyncio.gather that executes a list of tasks sequentially rather than in parallel, and good people told me how. But my joy ended as soon as I tried to nest one in_sequence() call inside another:

```python
import asyncio
from typing import Coroutine

async def work_a():
    print("Work 'A' start")
    await asyncio.sleep(0)
    print("Work 'A' finish")

async def work_b():
    print("Work 'B' start")
    await asyncio.sleep(0)
    print("Work 'B' finish")

async def raise_exception():
    …
```
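One way a sequential runner can clean up after a failure partway through (a sketch under the assumption that not-yet-started coroutines should simply be closed, with the exception re-raised so an enclosing in_sequence() can clean up its own remaining work the same way):

```python
import asyncio

async def in_sequence(*aws):
    """Run awaitables one at a time; on failure, close the ones not yet started."""
    pending = list(aws)
    try:
        while pending:
            await pending.pop(0)
    except BaseException:
        # Close never-started coroutines so Python does not emit
        # "coroutine ... was never awaited" warnings, then re-raise so
        # an outer in_sequence() sees the failure too.
        for aw in pending:
            if asyncio.iscoroutine(aw):
                aw.close()
        raise
```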

How to download multiple files using asyncio and wget in python?

Submitted by 倾然丶 夕夏残阳落幕 on 2020-04-18 05:46:14
Question: I want to download many files from Dukascopy. A typical URL looks like this:

```python
url = 'http://datafeed.dukascopy.com/datafeed/AUDUSD/2014/01/02/00h_ticks.bi5'
```

I tried the answer here, but most of the files came out with size 0. When I simply looped using wget (see below), I got complete files:

```python
import wget
from urllib.error import HTTPError

pair = 'AUDUSD'
for year in range(2014, 2015):
    for month in range(1, 13):
        for day in range(1, 32):
            for hour in range(24):
                try:
                    url = 'http://datafeed.dukascopy.com …
```
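A concurrent alternative built on aiohttp rather than wget (a sketch: the semaphore limit and the single-day URL range are assumptions, and empty response bodies are skipped instead of being written out as 0-byte files):

```python
import asyncio

import aiohttp

async def fetch(session, url, dest, sem):
    async with sem:                          # cap concurrent downloads
        async with session.get(url) as resp:
            if resp.status == 200:
                data = await resp.read()
                if data:                     # skip empty bodies
                    with open(dest, 'wb') as f:
                        f.write(data)

async def main(urls):
    sem = asyncio.Semaphore(16)              # concurrency limit is an assumption
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch(session, u, u.rsplit('/', 1)[-1], sem)
                               for u in urls))

urls = [f'http://datafeed.dukascopy.com/datafeed/AUDUSD/2014/01/02/{h:02d}h_ticks.bi5'
        for h in range(24)]
asyncio.run(main(urls))
```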

Asynchronous python requests.post()

Submitted by ♀尐吖头ヾ on 2020-04-14 06:19:32
Question: The idea is to collect responses for 1 million queries and store them in a dictionary. I want it to be asynchronous because requests.post takes 1 second per query, and I want to keep the loop going while it waits for a response. After some research I have something like this:

```python
async def get_response(id):
    query_json = id2json_dict[id]
    response = requests.post('some_url', json=query_json, verify=False)
    return eval(response.text)

async def main(id_list):
    for unique_id in id_list:
        …
```
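The catch in the excerpt is that requests.post is blocking, so marking the function `async` gains nothing by itself. One way to keep the blocking library but let many posts overlap (a sketch: the URL is a placeholder, and `resp.json()` stands in for the riskier `eval(response.text)`) is to hand each call to the default thread pool:

```python
import asyncio
import functools

import requests

async def get_response(url, query_json):
    loop = asyncio.get_running_loop()
    # run_in_executor moves the blocking call off the event loop; the
    # default thread pool size caps how many posts run at once.
    resp = await loop.run_in_executor(
        None, functools.partial(requests.post, url, json=query_json, verify=False))
    return resp.json()

async def main(id2json_dict):
    tasks = {uid: asyncio.create_task(get_response('https://example.com/api', q))
             for uid, q in id2json_dict.items()}
    return {uid: await task for uid, task in tasks.items()}

# results = asyncio.run(main({'id-1': {'q': 1}, 'id-2': {'q': 2}}))
```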

Handling Timeouts with asyncio

Submitted by 馋奶兔 on 2020-04-11 10:07:53
Question: Disclaimer: this is my first time experimenting with the asyncio module. I'm using asyncio.wait in the following manner to try to support a timeout while waiting for all results from a set of async tasks. This is part of a larger library, so I'm omitting some irrelevant code. Note that the library already supports submitting tasks and using timeouts with ThreadPoolExecutors and ProcessPoolExecutors, so I'm not really interested in suggestions to use those instead, or in questions about why I'm …
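For context, the usual asyncio.wait-with-timeout pattern looks roughly like this (a sketch: asyncio.wait does not cancel the still-pending tasks on timeout, so the caller has to do it explicitly):

```python
import asyncio

async def gather_with_timeout(tasks, timeout):
    # After the timeout, asyncio.wait returns (done, pending); the pending
    # tasks keep running unless we cancel them ourselves.
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for task in pending:
        task.cancel()
    return [t.result() for t in done]

async def work(n):
    await asyncio.sleep(n)
    return n

async def main():
    tasks = [asyncio.create_task(work(n)) for n in (0.1, 0.2, 5)]
    print(await gather_with_timeout(tasks, timeout=1.0))  # 0.1 and 0.2, in set order

asyncio.run(main())
```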