concurrent.futures

asyncio yield from concurrent.futures.Future of an Executor

孤街醉人 submitted on 2019-12-05 00:42:21
I have a long_task function that runs a heavy CPU-bound calculation, and I want to make it asynchronous by using the new asyncio framework. The resulting long_task_async function uses a ProcessPoolExecutor to offload work to a different process so it is not constrained by the GIL. The trouble is that, for some reason, the concurrent.futures.Future instance returned from ProcessPoolExecutor.submit throws a TypeError when yielded from. Is this by design? Are those futures not compatible with the asyncio.Future class? What would be a workaround? I also noticed that generators are not picklable, so
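A common workaround is to let the event loop wrap the executor's future for you, either via loop.run_in_executor or asyncio.wrap_future. A minimal sketch, assuming the real long_task is a picklable, module-level function:

```python
import asyncio
import concurrent.futures

def long_task(n):
    # stand-in for the heavy CPU-bound calculation from the question
    return sum(i * i for i in range(n))

async def long_task_async(n):
    loop = asyncio.get_running_loop()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        # run_in_executor wraps the pool's concurrent.futures.Future in an
        # awaitable asyncio future; asyncio.wrap_future does the same for a
        # future you already hold from pool.submit
        return await loop.run_in_executor(pool, long_task, n)

if __name__ == "__main__":
    print(asyncio.run(long_task_async(100)))
```

The __main__ guard matters here: with the spawn start method, worker processes re-import the main module, and an unguarded asyncio.run would recurse.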

CompletableFuture is not getting executed. If I use an ExecutorService pool it works as expected, but not with the default ForkJoinPool common pool

亡梦爱人 submitted on 2019-12-04 09:13:32
I am trying to run the following class; it terminates without executing the CompletableFuture.

```java
public class ThenApplyExample {
    public static void main(String[] args) throws Exception {
        //ExecutorService es = Executors.newCachedThreadPool();
        CompletableFuture<Student> studentCompletableFuture = CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return 3;
        }) // If I pass the ExecutorService created and commented out above, the program works as expected.
        .thenApply(i -> {
            for (int j = 0; j <= i; j++) {
                System.out.println(
```

multiprocessing queue full

萝らか妹 submitted on 2019-12-04 01:38:53
I'm using concurrent.futures to implement multiprocessing, and I am getting a queue.Full error, which is odd because I am only submitting 10 jobs.

```python
A_list = [np.random.rand(2000, 2000) for i in range(10)]

with ProcessPoolExecutor() as pool:
    pool.map(np.linalg.svd, A_list)
```

Error:

```
Exception in thread Thread-9:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 921, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 869, in run
    self._target(*self._args, **self.
```

How to spawn a future only if a free worker is available

ε祈祈猫儿з submitted on 2019-12-03 23:17:47
Question: I am trying to send information extracted from the lines of a big file to a process running on some server. To speed this up, I would like to do this with several threads in parallel. Using the Python 2.7 backport of concurrent.futures I tried this:

```python
f = open("big_file")
with ThreadPoolExecutor(max_workers=4) as e:
    for line in f:
        e.submit(send_line_function, line)
f.close()
```

However, this is problematic, because all futures get submitted instantly, so that my machine runs out of memory, because the
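The throttling described above can be sketched with a bounded semaphore: submission blocks once a fixed number of jobs are in flight, and a done-callback frees a slot as each worker finishes. send_line here is a hypothetical stand-in for the question's send_line_function:

```python
import concurrent.futures
import threading

def send_line(line):
    # hypothetical stand-in for the question's send_line_function
    return len(line)

def process_file(lines, max_workers=4, max_pending=8):
    # the semaphore caps how many futures exist at once; acquire() blocks
    # the submitting loop, and the done-callback releases a slot as soon
    # as any worker finishes
    slots = threading.BoundedSemaphore(max_pending)
    futures = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as ex:
        for line in lines:
            slots.acquire()
            fut = ex.submit(send_line, line)
            fut.add_done_callback(lambda _f: slots.release())
            futures.append(fut)
        return [f.result() for f in futures]

print(process_file(["alpha\n", "bravo\n", "charlie\n"]))
```

Note that the futures list itself still grows with the file; for truly huge inputs you would also consume results incrementally, but the semaphore alone already stops unprocessed lines from piling up in the executor's queue.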

Use tqdm with concurrent.futures?

ⅰ亾dé卋堺 submitted on 2019-12-03 22:27:18
I have a multithreaded function for which I would like a status bar using tqdm. Is there an easy way to show a status bar with ThreadPoolExecutor? It is the parallelization part that is confusing me.

```python
import concurrent.futures

def f(x):
    return x**2

my_iter = range(1000000)

def run(f, my_iter):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = list(executor.map(f, my_iter))
    return results

run(f, my_iter)  # wrap tqdm around this function?
```

You can wrap tqdm around the executor.map call to track the progress:

```python
list(tqdm(executor.map(f, my_iter), total=len(my_iter)))
```

Here is your
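A self-contained version of that pattern might look like the following; the try/except fallback is only there so the sketch also runs where tqdm is not installed:

```python
import concurrent.futures

try:
    from tqdm import tqdm
except ImportError:
    def tqdm(iterable, total=None):
        # no-op fallback so the sketch still runs without tqdm installed
        return iterable

def f(x):
    return x ** 2

def run(func, items):
    # executor.map yields results lazily, so tqdm ticks as each result
    # arrives instead of jumping to 100% at the end
    with concurrent.futures.ThreadPoolExecutor() as executor:
        return list(tqdm(executor.map(func, items), total=len(items)))

print(run(f, range(10)))
```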

Getting original line number for exception in concurrent.futures

拜拜、爱过 submitted on 2019-12-03 06:59:54
Question: Example of using concurrent.futures (backport for 2.7):

```python
import concurrent.futures                                           # line 01

def f(x):                                                           # line 02
    return x * x                                                    # line 03

data = [1, 2, 3, None, 5]                                           # line 04

with concurrent.futures.ThreadPoolExecutor(len(data)) as executor:  # line 05
    futures = [executor.submit(f, n) for n in data]                 # line 06
    for future in futures:                                          # line 07
        print(future.result())                                      # line 08
```

Output:

```
1
4
9
Traceback (most recent call last):
  File "C:\test.py", line 8, in <module>
    print future.result()  # line 08
  File
```
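On Python 3, future.result() re-raises the worker's exception together with its original traceback, but on the 2.7 backport a common workaround is to capture the traceback inside the submitted function itself. A sketch (f_wrapped is a hypothetical wrapper, not part of the backport's API):

```python
import concurrent.futures
import traceback

def f(x):
    return x * x

def f_wrapped(x):
    # capture the traceback at the point of failure, so the worker's
    # original line numbers survive the trip back through the future
    try:
        return f(x)
    except Exception:
        raise RuntimeError(traceback.format_exc())

with concurrent.futures.ThreadPoolExecutor(4) as executor:
    futures = [executor.submit(f_wrapped, n) for n in [1, 2, 3, None, 5]]
    for future in futures:
        try:
            print(future.result())
        except RuntimeError as exc:
            print("worker raised:\n%s" % exc)
```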

Scala Futures: default error handler for every newly created or mapped future

跟風遠走 submitted on 2019-12-01 19:55:31
Is there a way to always create a Future { ... } block with a default onFailure handler (one that, e.g., writes the stack trace to the console)? This handler should also be attached automatically to mapped futures (new futures created by calling map on a future that already has a default failure handler). See also my question here for more details: Scala on Android with scala.concurrent.Future does not report exceptions on system err/out. I want a "last resort" exception-logging hook in case someone does not call onFailure or something similar on a returned future. I had a similar problem, futures failing

How to pass a function with more than one argument to python concurrent.futures.ProcessPoolExecutor.map()?

余生长醉 submitted on 2019-12-01 03:03:28
I would like concurrent.futures.ProcessPoolExecutor.map() to call a function that takes 2 or more arguments. In the example below, I have resorted to using a lambda function and defining ref as an array of equal size to numberlist, filled with an identical value. First question: is there a better way of doing this? In the case where numberlist can be millions to billions of elements in size, ref would have to match numberlist in size, so this approach unnecessarily takes up precious memory, which I would like to avoid. I did this because I read that the map function will terminate its mapping
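Two memory-friendly alternatives to a numberlist-sized ref array are functools.partial and itertools.repeat; add_ref below is a hypothetical stand-in for the question's two-argument function:

```python
import concurrent.futures
import itertools
from functools import partial

def add_ref(ref, number):
    # hypothetical stand-in for the question's two-argument function
    return ref + number

if __name__ == "__main__":
    numberlist = range(5)
    ref = 100
    with concurrent.futures.ProcessPoolExecutor() as pool:
        # Option 1: functools.partial freezes the constant argument
        # (and, unlike a lambda, a partial of a module-level function
        # is picklable, so it survives the trip to a worker process)
        out1 = list(pool.map(partial(add_ref, ref), numberlist))
        # Option 2: map accepts several iterables and stops at the
        # shortest; itertools.repeat supplies ref lazily, with no
        # numberlist-sized copy ever held in memory
        out2 = list(pool.map(add_ref, itertools.repeat(ref), numberlist))
    print(out1, out2)
```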

Individual timeouts for concurrent.futures

瘦欲@ submitted on 2019-12-01 02:53:56
I see two ways to specify timeouts in concurrent.futures: as_completed() and wait(). Both handle N running futures, but I would like to specify an individual timeout for each future. Use case: a future getting data from a DB has a timeout of 0.5 s, while a future getting data from an HTTP server has a timeout of 1.2 s. How do I handle this with concurrent.futures, or is this library not the right tool? Conclusion: AFAIK the solution by mdurant is a good workaround. I think I will use a different library next time; maybe asyncio has better support for this. See: https://docs.python.org/3
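Since concurrent.futures has no per-future timeout API, one possible workaround is to track a deadline per future and poll each with future.result(timeout=remaining) in deadline order; fetch below is a stand-in for the DB/HTTP calls:

```python
import concurrent.futures
import time

def fetch(delay, payload):
    # stand-in for a DB or HTTP call of a given duration
    time.sleep(delay)
    return payload

def gather_with_deadlines(jobs):
    """jobs: list of (callable, args, timeout_in_seconds) triples."""
    results = [None] * len(jobs)
    with concurrent.futures.ThreadPoolExecutor() as ex:
        start = time.monotonic()
        tagged = [(start + timeout, i, ex.submit(fn, *args))
                  for i, (fn, args, timeout) in enumerate(jobs)]
        # visit futures in deadline order; each result() call then blocks
        # only for the time remaining on that future's own deadline
        for deadline, i, fut in sorted(tagged):
            remaining = max(0.0, deadline - time.monotonic())
            try:
                results[i] = fut.result(timeout=remaining)
            except concurrent.futures.TimeoutError:
                fut.cancel()  # best effort; a call already running keeps running
    return results

print(gather_with_deadlines([
    (fetch, (0.01, "db"), 0.5),
    (fetch, (0.02, "http"), 1.2),
]))
```

One caveat of this sketch: leaving the with block implies Executor.shutdown(wait=True), so timed-out calls that are already running are still waited for; cancel() only helps for futures that have not started yet.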