python-multiprocessing

Python multiprocessing.Pool got stuck after long execution

Submitted by 旧巷老猫 on 2020-01-23 10:43:26
Question: I am developing a tool that analyzes huge files. To speed it up I introduced multiprocessing, and everything seems to work fine. I am using multiprocessing.Pool to create N worker processes, which handle different chunks of work I created beforehand.

    pool = Pool(processes=params.nthreads)
    for chunk in chunk_list:
        pool.apply_async(__parallel_quant, [filelist, chunk, outfilename])
    pool.close()
    pool.join()

As you can see, this is standard pool execution, with no …
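A minimal runnable sketch of that pattern, with two additions that often help when a long-running pool stalls: an error_callback, since a worker exception otherwise leaves an apply_async result silently unfinished, and maxtasksperchild, which recycles workers that accumulate state or leak memory over long executions. analyze_chunk and the chunk data below are placeholders, not the asker's real code.

    from multiprocessing import Pool

    def analyze_chunk(chunk):
        # stand-in for the real per-chunk analysis
        return sum(chunk)

    def on_error(exc):
        # surface worker exceptions instead of losing them silently
        print("worker failed:", exc)

    if __name__ == "__main__":
        chunk_list = [list(range(i, i + 10)) for i in range(0, 100, 10)]
        # maxtasksperchild periodically replaces workers, which helps
        # pools that degrade over very long runs
        pool = Pool(processes=4, maxtasksperchild=10)
        results = [pool.apply_async(analyze_chunk, (c,), error_callback=on_error)
                   for c in chunk_list]
        pool.close()
        pool.join()
        print([r.get() for r in results])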

How to cancel long-running subprocesses started with `concurrent.futures.ProcessPoolExecutor`?

Submitted by 為{幸葍}努か on 2020-01-22 16:58:05
Question: You can see the full code here. A simplified version of my code follows:

    executor = ProcessPoolExecutor(10)
    try:
        coro = bot.loop.run_in_executor(executor, processUserInput, userInput)
        result = await asyncio.wait_for(coro, timeout=10.0, loop=bot.loop)
    except asyncio.TimeoutError:
        result = "Operation took longer than 10 seconds. Aborted."

Unfortunately, when an operation times out, that process is still running, even though the future has been cancelled. How do I cancel that process/task so that it …
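Cancelling the future does not stop a worker that is already running, and a ProcessPoolExecutor offers no way to kill one. A common workaround, sketched below under assumed names (process_user_input stands in for the real job), is to run the work in a dedicated multiprocessing.Process, which really can be terminated on timeout.

    import asyncio
    import multiprocessing

    def process_user_input(user_input, conn):
        conn.send(user_input.upper())  # stand-in for the real work
        conn.close()

    async def run_with_timeout(user_input, timeout=10.0):
        parent_conn, child_conn = multiprocessing.Pipe()
        proc = multiprocessing.Process(
            target=process_user_input, args=(user_input, child_conn))
        proc.start()
        child_conn.close()  # keep only the child's copy of this end open
        loop = asyncio.get_running_loop()
        try:
            # block on the pipe in a thread so the event loop stays free
            return await asyncio.wait_for(
                loop.run_in_executor(None, parent_conn.recv), timeout)
        except asyncio.TimeoutError:
            proc.terminate()  # actually kills the worker, unlike future.cancel()
            return "Operation took longer than 10 seconds. Aborted."
        finally:
            proc.join()

    if __name__ == "__main__":
        print(asyncio.run(run_with_timeout("hello")))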

Purpose of multiprocessing.Pool.apply and multiprocessing.Pool.apply_async

Submitted by 耗尽温柔 on 2020-01-21 19:33:09
Question: See the example and execution result below:

    #!/usr/bin/env python3.4
    from multiprocessing import Pool
    import time
    import os

    def initializer():
        print("In initializer pid is {} ppid is {}".format(os.getpid(), os.getppid()))

    def f(x):
        print("In f pid is {} ppid is {}".format(os.getpid(), os.getppid()))
        return x*x

    if __name__ == '__main__':
        print("In main pid is {} ppid is {}".format(os.getpid(), os.getppid()))
        with Pool(processes=4, initializer=initializer) as pool:  # start 4 worker processes
            result = …
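A short sketch (not the asker's full example) of what distinguishes the two calls: Pool.apply blocks the caller until the result is ready, so tasks run one at a time, while Pool.apply_async returns an AsyncResult immediately, letting several tasks run in parallel.

    from multiprocessing import Pool

    def f(x):
        return x * x

    if __name__ == '__main__':
        with Pool(processes=4) as pool:
            print(pool.apply(f, (3,)))   # blocks until the worker returns 9

            # all four tasks can run concurrently; get() collects each result
            async_results = [pool.apply_async(f, (i,)) for i in range(4)]
            print([r.get() for r in async_results])   # [0, 1, 4, 9]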

My process finishes its `run` function, but it doesn't die

Submitted by ∥☆過路亽.° on 2020-01-21 18:48:50
Question: I'm subclassing multiprocessing.Process to create a class that will asynchronously grab images from a camera and push them to some queues for display and saving to disk. The problem I'm having is that when I issue a stop command using a multiprocessing.Event object that belongs to the Process-descendant object, the process successfully completes the last line of the run function, but then it doesn't die. The process just continues to exist and continues to return True from is_alive …
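A sketch of that setup, with the usual culprit flagged: a multiprocessing.Queue flushes items through a background feeder thread, and a process that has put more data on a queue than the underlying pipe can buffer will not exit after run() returns until that data is consumed. Draining the queue from the parent (or calling queue.cancel_join_thread() in the child) lets it die. The names below are illustrative.

    import multiprocessing
    import time

    class CameraProcess(multiprocessing.Process):
        def __init__(self, frame_queue):
            super().__init__()
            self.frame_queue = frame_queue
            self.stop_event = multiprocessing.Event()

        def run(self):
            while not self.stop_event.is_set():
                self.frame_queue.put("frame")   # stand-in for a grabbed image
                time.sleep(0.01)
            # run() returns here, but the queue's feeder thread can keep the
            # process alive until every buffered item has been consumed

    if __name__ == '__main__':
        q = multiprocessing.Queue()
        p = CameraProcess(q)
        p.start()
        time.sleep(0.1)
        p.stop_event.set()
        while p.is_alive():
            while not q.empty():   # drain so the feeder thread can flush
                q.get()
            p.join(timeout=0.1)
        print("alive?", p.is_alive())   # False once the queue is drained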

multiprocessing gives AssertionError: daemonic processes are not allowed to have children

Submitted by 点点圈 on 2020-01-21 09:05:26
Question: I am trying to use multiprocessing for the first time, so I thought I would make a very simple test example which factors 100 different numbers.

    from multiprocessing import Pool
    from primefac import factorint

    N = 10**30
    L = range(N, N + 100)
    pool = Pool()
    pool.map(factorint, L)

This gives me the error:

    Traceback (most recent call last):
      File "test.py", line 8, in <module>
        pool.map(factorint, L)
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map
        return self.map_async(func, …
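This error usually means the mapped function spawns processes of its own (primefac uses multiprocessing internally), and daemonic Pool workers are not allowed to have children. A sketch of one workaround under that assumption: launch plain multiprocessing.Process workers, which are non-daemonic by default and may spawn children. The worker body here is a stand-in for the real factoring call.

    from multiprocessing import Process, Queue

    def worker(n, out):
        out.put((n, n % 97))   # stand-in for a call that forks, e.g. factorint(n)

    if __name__ == '__main__':
        N = 10**30
        out = Queue()
        procs = [Process(target=worker, args=(n, out)) for n in range(N, N + 8)]
        for p in procs:
            p.start()
        # collect results before join() so a full pipe cannot block the children
        results = [out.get() for _ in procs]
        for p in procs:
            p.join()
        print(results)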

Python 2.7: How to compensate for missing pool.starmap?

Submitted by 一笑奈何 on 2020-01-21 07:51:46
Question: I have defined this function:

    def writeonfiles(a, seed):
        random.seed(seed)
        f = open(a, "w+")
        for i in range(0, 10):
            j = random.randint(0, 10)
            #print j
            f.write(str(j))   # write() needs a string, not an int
        f.close()

where a is a string containing the path of the file and seed is an integer seed. I want to parallelize a simple program in such a way that each core takes one of the available paths that I give it, seeds its random generator, and writes some random numbers to that file. So, for example, if I pass the vector vector = [Test/file1 …
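In Python 2.7, Pool has map but not starmap, so the standard workaround is to pack the arguments into tuples and unpack them in a small module-level wrapper. A sketch with illustrative paths and seeds:

    import random
    from multiprocessing import Pool

    def writeonfiles(a, seed):
        random.seed(seed)
        with open(a, "w+") as f:
            for _ in range(10):
                f.write(str(random.randint(0, 10)))

    def writeonfiles_star(args):
        # Pool.map passes a single argument, so unpack the (path, seed) tuple
        return writeonfiles(*args)

    if __name__ == '__main__':
        paths = ["file1.txt", "file2.txt", "file3.txt"]
        seeds = [1, 2, 3]
        pool = Pool()
        pool.map(writeonfiles_star, zip(paths, seeds))
        pool.close()
        pool.join()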

Run GUI concurrently with Flask application

Submitted by 不想你离开。 on 2020-01-17 08:40:01
Question: I'm trying to build a simple tkinter GUI window around my Flask application for the noobs in my office. I want the script to perform these tasks in the following order:

1. Start the Flask web server.
2. Open a tkinter GUI window with one button. When pressed, that button opens the app's index page (e.g. http://127.0.0.1:5000).
3. Terminate the Flask web server when the tkinter GUI window is closed.

This is what I have so far, but the app runs independently of the tkinter window and I must terminate the Flask …
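A sketch of one way to get that ordering, assuming a minimal Flask app defined in the same module: run the server in a separate process, block on the tkinter main loop, and terminate the server process when the window closes.

    import multiprocessing
    import webbrowser
    import tkinter as tk
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from Flask"

    def run_server():
        app.run(host="127.0.0.1", port=5000)

    if __name__ == "__main__":
        server = multiprocessing.Process(target=run_server)
        server.start()

        root = tk.Tk()
        tk.Button(root, text="Open app",
                  command=lambda: webbrowser.open("http://127.0.0.1:5000")
                  ).pack(padx=20, pady=20)
        root.mainloop()       # blocks until the window is closed

        server.terminate()    # stop the Flask process when the GUI exits
        server.join()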

Importable `multiprocessing.Pool` function

Submitted by 亡梦爱人 on 2020-01-16 19:48:08
Question: This is probably simple and I'm just not finding a suitable question. If I write a stand-alone script with multiprocessing.Pool, I know I'm supposed to do:

    def foo(x):
        return x**2

    if __name__ == '__main__':
        with Pool(n_jobs) as p:
            p.map(foo, list_of_inputs)

But if I then want to make it an importable function, I assume __name__ will no longer be '__main__'. Is it safe to just do:

    def __foo(x):
        return x**2

    def bar(list_of_inputs, n_jobs):
        with Pool(n_jobs) as p:
            out = p.map(__foo, list_of_inputs) …
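That pattern is generally safe: the __name__ guard belongs in whichever script is ultimately run, not in the imported module, because worker processes re-import the module and its body has no side effects. A sketch of the importable module (saved, say, as mymodule.py; the name is illustrative):

    # mymodule.py
    from multiprocessing import Pool

    def _foo(x):
        # must live at module level so workers can unpickle it by name
        return x ** 2

    def bar(list_of_inputs, n_jobs=4):
        with Pool(n_jobs) as p:
            return p.map(_foo, list_of_inputs)

    # In the calling script (which does still need the guard on
    # platforms that spawn, i.e. Windows and macOS):
    #     if __name__ == '__main__':
    #         from mymodule import bar
    #         print(bar(range(5)))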

Importing TensorFlow 2.0 GPU from different processes

Submitted by 柔情痞子 on 2020-01-16 09:03:09
Question: I'm working on a project in which I have a Python module that implements an iterative process, with some computations performed on the GPU by TensorFlow 2.0. The module works correctly when used stand-alone from a single process. Since I have to perform several runs with different parameters, I'd like to parallelize the calls, but when I call the module (which imports tensorflow) from a different process, I get CUDA_ERROR_OUT_OF_MEMORY and an infinite loop of CUDA_ERROR_NOT_INITIALIZED, so the …
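A sketch of the usual mitigation, assuming each run is independent: use the "spawn" start method and import TensorFlow inside the worker, so every child initializes CUDA itself instead of inheriting a broken context from a forked parent, and enable memory growth so concurrent processes don't each claim the whole GPU up front.

    import multiprocessing

    def init_worker():
        import tensorflow as tf   # deferred import: CUDA initializes in the child
        for gpu in tf.config.experimental.list_physical_devices("GPU"):
            # share the GPU instead of grabbing all memory immediately
            tf.config.experimental.set_memory_growth(gpu, True)

    def run_one(size):
        import tensorflow as tf   # already loaded in this worker; cached
        return float(tf.reduce_sum(tf.ones((size, size))))   # stand-in computation

    if __name__ == "__main__":
        ctx = multiprocessing.get_context("spawn")
        with ctx.Pool(processes=2, initializer=init_worker) as pool:
            print(pool.map(run_one, [2, 3, 4]))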