coroutine

Fibers in C#: are they faster than iterators, and have people used them?

好久不见. Submitted on 2019-12-08 22:59:00
Question: So I was chatting with a colleague about fibers and turned up this paper from 2003 that describes an implementation of coroutines in C# using the Fiber API. The implementation of Yield in this paper was for .NET 1.1, so it predates the yield return syntax that appeared in .NET 2.0. At first glance, it definitely looks as if the implementation here is potentially faster and could scale across multiple CPUs rather well. Has anyone used it? Answer 1: I haven't used it, but I have an interest in the

Error calling Dispatchers.setMain() in unit test

可紊 Submitted on 2019-12-08 17:33:41
Question: I have started to use kotlinx-coroutines-test (https://github.com/Kotlin/kotlinx.coroutines/blob/master/core/kotlinx-coroutines-test/README.md) in a JUnit unit test, but I get the following error when I call Dispatchers.setMain(): java.lang.IllegalArgumentException: TestMainDispatcher is not set as main dispatcher, have Main[missing, cause=java.lang.AbstractMethodError: kotlinx.coroutines.test.internal.TestMainDispatcherFactory.createDispatcher()Lkotlinx/coroutines/MainCoroutineDispatcher;]

Why should I use coroutine in C/C++

风格不统一 Submitted on 2019-12-08 12:45:49
Question: This image comes from Practical usage of setjmp and longjmp in C. As I understand it, a coroutine makes two processes look as if they run in parallel to a human reader, while the machine actually runs a single process. But code written with setjmp & longjmp is very hard to read. If I had to write the same thing myself, for example for processes A and B, I would give the processes several states to split them into different pieces (states) and run them sequentially, like: Process A switch (state) case A1: if (A1 is done) do B1 break
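The contrast the question is reaching for (an explicit state switch versus a coroutine that remembers its own position) can be shown with a small sketch. This is Python rather than C, used here only to illustrate the idea; the two processes and the round-robin scheduler are made up for the example.

# Each generator ("coroutine") keeps its own position between steps,
# so no explicit switch(state) dispatcher is needed.
def process_a():
    print("A1"); yield
    print("A2"); yield

def process_b():
    print("B1"); yield
    print("B2"); yield

# A tiny round-robin scheduler that interleaves the two processes.
def run(*procs):
    pending = [p() for p in procs]
    while pending:
        gen = pending.pop(0)
        try:
            next(gen)
            pending.append(gen)
        except StopIteration:
            pass

run(process_a, process_b)  # prints A1, B1, A2, B2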

Combine tornado gen.coroutine and joblib mem.cache decorators

流过昼夜 Submitted on 2019-12-08 12:37:13
Question: Imagine a function that handles a heavy computational job and that we wish to execute asynchronously in a Tornado application context. Moreover, we would like to evaluate the function lazily, storing its results to disk and not rerunning the function twice for the same arguments. Without caching the result (memoization), one would do the following: def complex_computation(arguments): ... return result @gen.coroutine def complex_computation_caller(arguments): ... result = complex
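One way the two decorators can be combined, sketched here under assumptions (the cache directory, pool size, and the body of complex_computation are placeholders, and this is not necessarily the accepted answer): let joblib's Memory memoize the blocking function, and run it on a thread pool that the Tornado coroutine yields on.

from concurrent.futures import ThreadPoolExecutor

from joblib import Memory
from tornado import gen

memory = Memory("/tmp/joblib_cache", verbose=0)  # placeholder cache location
executor = ThreadPoolExecutor(max_workers=4)     # placeholder pool size

@memory.cache
def complex_computation(arguments):
    # Heavy, blocking work; joblib memoizes the result on disk, keyed by the arguments.
    return sum(x * x for x in arguments)

@gen.coroutine
def complex_computation_caller(arguments):
    # Yielding a concurrent.futures.Future keeps the IOLoop free
    # while the computation runs on the thread pool.
    result = yield executor.submit(complex_computation, arguments)
    return result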

Lua - Threading

风流意气都作罢 Submitted on 2019-12-08 11:04:28
Question: In the following code I read values from a device, add a timestamp to them, and send the string via e-mail. The function send_email() takes 3 minutes and stops the rest of the code from running. My aim is to execute send_email() on another thread (or something similar), so that there is no 3-minute gap between the collected datasets; during that time no new data will be received, but I need to collect all the data. The output should be: value_10:30:00 -> value_10:30:10 -> value_10:30

Coroutines in numba

夙愿已清 Submitted on 2019-12-07 10:39:24
Question: I'm working on something that requires fast coroutines and I believe numba could speed up my code. Here's a silly example: a function that squares its input and adds to it the number of times it has been called. def make_square_plus_count(): i = 0 def square_plus_count(x): nonlocal i i += 1 return x**2 + i return square_plus_count You can't even nopython=False JIT this, presumably due to the nonlocal keyword. But you don't need nonlocal if you use a class instead: def make_square_plus_count():
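Completing the thought the excerpt cuts off at, the class-based version might look roughly like this (a sketch; whether numba can then JIT-compile such a class is exactly what the question goes on to ask):

class SquarePlusCount:
    # Same behaviour as the closure: squares the input and adds the number
    # of calls so far, but the counter lives on the instance, so no nonlocal is needed.
    def __init__(self):
        self.i = 0

    def __call__(self, x):
        self.i += 1
        return x ** 2 + self.i

square_plus_count = SquarePlusCount()
print(square_plus_count(3))  # 10  (3**2 + 1)
print(square_plus_count(3))  # 11  (3**2 + 2)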

Python 3 asyncio - yield from vs asyncio.async stack usage

一个人想着一个人 Submitted on 2019-12-07 10:00:00
Question: I'm evaluating different patterns for periodic execution (actual sleeps/delays omitted for brevity) using the Python 3 asyncio framework, and I have two pieces of code that behave differently and I can't explain why. The first version, which uses yield from to call itself recursively, exhausts the stack in about 1000 iterations, as I expected. The second version calls the coroutine recursively, but delegates the actual event-loop execution to asyncio.async and does not exhaust the stack. Can you
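The behaviour the question describes can be reproduced with a small sketch. Modern async/await syntax is used here in place of the question's yield from / asyncio.async pair (asyncio.async was later renamed asyncio.ensure_future); the iteration counts are illustrative.

import asyncio

async def recurse_directly(n):
    # Awaiting a direct self-call keeps every caller's frame alive,
    # so deep enough recursion raises RecursionError.
    if n > 0:
        await recurse_directly(n - 1)

async def recurse_via_task(n):
    # Scheduling the next step as a separate task lets this frame
    # return immediately, so the call stack never grows.
    if n > 0:
        asyncio.ensure_future(recurse_via_task(n - 1))

asyncio.run(recurse_directly(100))        # fine: only ~100 nested frames
# asyncio.run(recurse_directly(100_000))  # would exhaust the stack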

asyncio as_yielded from async generators

你。 Submitted on 2019-12-06 22:14:43
Question: I'm looking to be able to yield from a number of async coroutines. asyncio's as_completed is kind of close to what I'm looking for (i.e. I want any of the coroutines to be able to yield back to the caller at any time and then continue), but it only seems to allow regular coroutines with a single return. Here's what I have so far: import asyncio async def test(id_): print(f'{id_} sleeping') await asyncio.sleep(id_) return id_ async def test_gen(id_): count = 0 while True: print(f'{id_}
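One common approach to what the question is after — consuming items from several async generators as soon as any of them produces one — is to pump each generator into a shared queue. This is a hedged sketch, not necessarily the asker's eventual solution; the ticker generator is made up for the demo.

import asyncio

async def merged(*agens):
    # Yield items from several async generators as they become available.
    queue = asyncio.Queue()
    done = object()  # sentinel marking one generator as exhausted

    async def pump(agen):
        async for item in agen:
            await queue.put(item)
        await queue.put(done)

    tasks = [asyncio.ensure_future(pump(g)) for g in agens]
    remaining = len(tasks)
    while remaining:
        item = await queue.get()
        if item is done:
            remaining -= 1
        else:
            yield item

async def main():
    async def ticker(id_, period):
        for n in range(3):
            await asyncio.sleep(period)
            yield f"{id_}-{n}"

    async for value in merged(ticker("a", 0.10), ticker("b", 0.15)):
        print(value)

asyncio.run(main())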

How do I kill a task / coroutine in Julia?

狂风中的少年 Submitted on 2019-12-06 17:46:28
Question: using HttpServer http = HttpHandler() do request::Request, response::Response show(request) Response("Hello there") end http.events["error"] = (client, error) -> println(error) http.events["listen"] = (port) -> println("Listening on $port") server = Server(http) t = @async run(server, 3000) This starts a simple little web server asynchronously. The problem is that I have no idea how to stop it. I've been going through the Julia documentation and trying to find some function that will remove this

CoroutineExceptionHandler not executed when provided as launch context

旧城冷巷雨未停 Submitted on 2019-12-06 06:41:19
Question: When I run this: fun f() = runBlocking { val eh = CoroutineExceptionHandler { _, e -> trace("exception handler: $e") } val j1 = launch(eh) { trace("launched") delay(1000) throw RuntimeException("error!") } trace("joining") j1.join() trace("after join") } f() This is the output: [main @coroutine#1]: joining [main @coroutine#2]: launched java.lang.RuntimeException: error! at ExceptionHandling$f9$1$j1$1.invokeSuspend(ExceptionHandling.kts:164) at kotlin.coroutines.jvm.internal.BaseContinuationImpl