process-pool

Starmap combined with tqdm?

Posted 2019-12-01 18:24:41
I am doing some parallel processing, as follows:

```python
with mp.Pool(8) as tmpPool:
    results = tmpPool.starmap(my_function, inputs)
```

where inputs look like `[(1, 0.2312), (5, 0.52), ...]`, i.e., tuples of an int and a float. The code runs nicely, yet I cannot seem to wrap a progress bar (tqdm) around it, as can be done with e.g. the imap method:

```python
results = list(tqdm.tqdm(pool.imap(some_function, some_inputs)))
```

Can this be done for starmap also? Thanks!

It's not possible with starmap(), but it is possible with a patch adding Pool.istarmap(), based on the code for imap(). All you have to do is create the …
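Since the istarmap patch is cut off above, here is a minimal patch-free sketch of the same idea, assuming a toy `my_function`: a module-level wrapper (hypothetically named `star_call` here) unpacks each argument tuple so the progress-bar-friendly `imap` can drive starmap-style calls:

```python
import multiprocessing as mp
import tqdm

def my_function(n, x):
    # Toy stand-in for the real work; assumes (int, float) inputs.
    return n * x

def star_call(args):
    # Unpack the argument tuple so imap can emulate starmap.
    return my_function(*args)

if __name__ == "__main__":
    inputs = [(1, 0.2312), (5, 0.52)]
    with mp.Pool(8) as tmpPool:
        # imap yields results one at a time, so tqdm can track progress;
        # total= is needed because imap's iterator has no len().
        results = list(tqdm.tqdm(tmpPool.imap(star_call, inputs),
                                 total=len(inputs)))
    print(results)
```

The wrapper must live at module level so the worker processes can pickle it; like starmap, imap returns results in input order.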

Why do concurrent.futures.ProcessPoolExecutor and multiprocessing.pool.Pool fail with super in Python?

Posted 2019-12-01 05:59:09
Why does the following Python code using the concurrent.futures module hang forever?

```python
import concurrent.futures

class A:
    def f(self):
        print("called")

class B(A):
    def f(self):
        executor = concurrent.futures.ProcessPoolExecutor(max_workers=2)
        executor.submit(super().f)

if __name__ == "__main__":
    B().f()
```

The call raises an invisible exception `[Errno 24] Too many open files` (to see it, replace the line `executor.submit(super().f)` with `print(executor.submit(super().f).exception())`). However, replacing ProcessPoolExecutor with ThreadPoolExecutor prints "called" as expected.
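An explanation consistent with the symptoms above: ProcessPoolExecutor must pickle the submitted callable, and a bound method like `super().f` is pickled as the instance plus the method *name* "f", so the worker re-resolves it to `B.f` and recurses, each level spawning a fresh executor until the OS runs out of file descriptors. ThreadPoolExecutor never pickles anything, which is why it works. A hedged sketch of one workaround is to pass the parent's function and the instance separately:

```python
import concurrent.futures

class A:
    def f(self):
        print("called")

class B(A):
    def f(self):
        with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
            # Submit the parent implementation as a plain function plus the
            # instance, instead of the bound method super().f. A.f is pickled
            # by qualified name, so the worker cannot fall back into B.f.
            future = executor.submit(A.f, self)
            future.result()  # propagate any exception instead of hiding it

if __name__ == "__main__":
    B().f()
```

The same `A.f(self)` trick applies to multiprocessing.pool.Pool, which relies on the same pickling step.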

multiprocessing pool not working in nested functions

Posted 2019-11-29 15:35:10
The following code is not executing as expected:

```python
import multiprocessing

lock = multiprocessing.Lock()

def dummy():
    def log_results_l1(results):
        lock.acquire()
        print("Writing results", results)
        lock.release()

    def mp_execute_instance_l1(cmd):
        print(cmd)
        return cmd

    cmds = [x for x in range(10)]
    pool = multiprocessing.Pool(processes=8)
    for c in cmds:
        pool.apply_async(mp_execute_instance_l1, args=(c,), callback=log_results_l1)
    pool.close()
    pool.join()
    print("done")

dummy()
```

But it does work if the functions are not nested. What is going on?

multiprocessing.Pool methods like the apply* and *map* methods need to pickle the function they dispatch to the workers, and nested (local) functions cannot be pickled, so the tasks fail before they ever run.
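A minimal sketch of the standard fix, assuming the functions do not need to close over dummy's local state: hoist them to module level so they are picklable, and add an error_callback (the `report_error` helper here is illustrative) so failures are no longer silent:

```python
import multiprocessing

lock = multiprocessing.Lock()

# Module-level functions can be pickled and shipped to worker processes.
def log_results_l1(results):
    with lock:  # the callback runs in the parent process
        print("Writing results", results)

def mp_execute_instance_l1(cmd):
    print(cmd)
    return cmd

def report_error(exc):
    # Without an error_callback, apply_async swallows worker errors silently.
    print("Task failed:", exc)

def dummy():
    cmds = list(range(10))
    pool = multiprocessing.Pool(processes=8)
    for c in cmds:
        pool.apply_async(mp_execute_instance_l1, args=(c,),
                         callback=log_results_l1, error_callback=report_error)
    pool.close()
    pool.join()
    print("done")

if __name__ == "__main__":
    dummy()
```

With the original nested functions, report_error would have printed the pickling failure instead of the pool finishing silently.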

How to terminate long-running computation (CPU bound task) in Python using asyncio and concurrent.futures.ProcessPoolExecutor?

Posted 2019-11-29 05:19:19
Similar question (but the answer does not work for me): How to cancel long-running subprocesses running using concurrent.futures.ProcessPoolExecutor?

Unlike the question linked above and the solution provided, in my case the computation itself is rather long (CPU bound) and cannot be run in a loop to check if some event has happened. Reduced version of the code below:

```python
import asyncio
import concurrent.futures as futures
import time

class Simulator:
    def __init__(self):
        self._loop = None
        self._lmz_executor = None
        self._tasks = []
        self._max_execution_time = time.monotonic() + 60
        self._long_running …
```
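The question body is cut off, but the core problem is stated: a future submitted to ProcessPoolExecutor cannot interrupt a worker that is busy in uncooperative CPU-bound code. One common workaround, sketched below with illustrative names rather than the asker's Simulator, is to manage a raw multiprocessing.Process yourself, await its completion from asyncio, and terminate() it on timeout:

```python
import asyncio
import multiprocessing
import time

def long_running(seconds):
    # Stand-in for an uncooperative CPU-bound computation: it never
    # checks a flag or event, so it cannot cancel itself.
    time.sleep(seconds)
    return "finished"

async def run_with_timeout(work_seconds, timeout):
    # Unlike an executor worker, a raw Process can be forcibly terminated.
    proc = multiprocessing.Process(target=long_running, args=(work_seconds,))
    proc.start()
    loop = asyncio.get_running_loop()
    try:
        # join() blocks, so run it in the default thread pool to keep
        # the event loop responsive while waiting.
        await asyncio.wait_for(loop.run_in_executor(None, proc.join), timeout)
        print("completed within the deadline")
    except asyncio.TimeoutError:
        proc.terminate()  # kill the worker mid-computation
        proc.join()
        print("terminated after", timeout, "seconds")

if __name__ == "__main__":
    asyncio.run(run_with_timeout(60, 2))
```

The trade-off is that terminate() discards any result and skips cleanup in the child, so results that matter should be shipped back through a Queue or Pipe before the deadline.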