I am trying to properly understand and implement two concurrently running Task objects using Python 3's relatively new asyncio module. In a nutshell, asyncio seems designed to handle asynchronous processes and concurrent Task execution over an event loop.
You don't necessarily need a yield from x to give control over to the event loop. In your example, I think the proper way would be to do a yield None, or equivalently a simple yield, rather than a yield from asyncio.sleep(0.001):
import asyncio

@asyncio.coroutine
def say_boo():
    i = 0
    while True:
        yield None          # hand control back to the event loop for one iteration
        print("...boo {0}".format(i))
        i += 1

@asyncio.coroutine
def say_baa():
    i = 0
    while True:
        yield               # a bare yield works just as well
        print("...baa {0}".format(i))
        i += 1

# Wrap the coroutines in Tasks so the event loop schedules them.
# (asyncio.async() was later renamed asyncio.ensure_future().)
boo_task = asyncio.async(say_boo())
baa_task = asyncio.async(say_baa())

loop = asyncio.get_event_loop()
loop.run_forever()
Coroutines are just plain old Python generators. Internally, the asyncio event loop keeps a record of these generators and calls gen.send() on each of them, one by one, in a never-ending loop. Whenever you yield, the call to gen.send() completes and the loop can move on. (I'm simplifying it; take a look around https://hg.python.org/cpython/file/3.4/Lib/asyncio/tasks.py#l265 for the actual code.)
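To see that mechanism in isolation, here is a toy round-robin "loop" that drives two plain generators with gen.send() directly. It is only a sketch of the idea (the say and run_round_robin names are made up for illustration), not asyncio's actual implementation:

def say(word, times):
    for i in range(times):
        yield                          # hand control back to the scheduler
        print("...{0} {1}".format(word, i))

def run_round_robin(generators):
    # Keep calling gen.send() on each generator until every one has finished.
    gens = list(generators)
    while gens:
        for gen in list(gens):
            try:
                gen.send(None)         # resume the generator up to its next yield
            except StopIteration:
                gens.remove(gen)       # this generator has run to completion

run_round_robin([say("boo", 3), say("baa", 3)])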
That said, I would still go the run_in_executor route if you need to do CPU-intensive computation without sharing data.
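For reference, here is a minimal sketch of that route (the crunch function and its 10 ** 6 argument are placeholders I made up): loop.run_in_executor() hands the blocking call to an executor and returns a future you can yield from, so the event loop stays responsive. For genuinely CPU-bound work you would typically pass in a concurrent.futures.ProcessPoolExecutor rather than relying on the default thread pool.

import asyncio

def crunch(n):
    # Blocking, CPU-bound work; this runs in an executor worker, not on the event loop.
    return sum(i * i for i in range(n))

@asyncio.coroutine
def main(loop):
    # None means "use the loop's default executor" (a thread pool by default).
    result = yield from loop.run_in_executor(None, crunch, 10 ** 6)
    print("crunch result:", result)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
loop.close()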