python-multiprocessing

Terminating a process breaks python curses

牧云@^-^@ submitted on 2019-12-21 06:35:21
Question: Using Python multiprocessing and curses, it appears that terminating a Process interferes with the curses display. For example, in the following code, why does terminating the process stop curses from displaying the text (pressing b after pressing a)? More precisely, it seems that not only is the string "hello" no longer displayed, but the whole curses window disappears.

    import curses
    from multiprocessing import Process
    from time import sleep

    def display(stdscr):
        stdscr.clear()
        curses.newwin(0,0)
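
On Linux, multiprocessing forks the child, so it inherits the parent's terminal state, and killing it mid-flight can leave that state disturbed. A minimal sketch of one workaround, assuming that is the cause here: keep the child away from the terminal entirely and force a full repaint after terminating it (the worker body is a hypothetical stand-in).

    import curses
    from multiprocessing import Process
    from time import sleep

    def worker():
        sleep(30)                    # background work only; never touches curses

    def main(stdscr):
        stdscr.addstr(0, 0, "hello")
        stdscr.refresh()
        p = Process(target=worker)
        p.start()
        stdscr.getkey()              # e.g. press 'b'
        p.terminate()
        p.join()
        stdscr.redrawwin()           # ask curses to repaint the whole window
        stdscr.refresh()
        stdscr.getkey()

    curses.wrapper(main)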

Difference in behavior between os.fork and multiprocessing.Process

我只是一个虾纸丫 submitted on 2019-12-21 05:14:06
Question: I have this code:

    import os

    pid = os.fork()
    if pid == 0:
        os.environ['HOME'] = "rep1"
        external_function()
    else:
        os.environ['HOME'] = "rep2"
        external_function()

and this code:

    import os
    from multiprocessing import Process, Pipe

    def f(conn):
        os.environ['HOME'] = "rep1"
        external_function()
        conn.send(some_data)
        conn.close()

    if __name__ == '__main__':
        os.environ['HOME'] = "rep2"
        external_function()
        parent_conn, child_conn = Pipe()
        p = Process(target=f, args=(child_conn,))
        p.start()
        print parent_conn.recv()
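
The key difference, sketched below assuming the default "fork" start method on Linux: os.fork() splits execution at the call site, so both branches run their own setup from that point on, whereas multiprocessing.Process forks only when start() is called, i.e. after the parent has already executed everything above it (here, setting HOME to "rep2" and running the parent's work).

    import os
    from multiprocessing import Process

    def show(tag):
        print(tag, os.getpid(), os.environ.get('HOME'))

    def child():
        os.environ['HOME'] = "rep1"
        show("child ")

    if __name__ == '__main__':
        os.environ['HOME'] = "rep2"
        show("parent")           # parent work happens before the child exists
        p = Process(target=child)
        p.start()                # child is forked here, inheriting HOME == "rep2"
        p.join()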

Basic multiprocessing with while loop

南楼画角 submitted on 2019-12-20 19:47:11
Question: I am brand new to the multiprocessing package in Python, and my confusion will probably be easy for someone who knows more to clear up. I've been reading about concurrency and have searched for other questions like this, but have found nothing. (FYI, I do NOT want to use multithreading because the GIL will slow down my application a lot.) I am thinking in the framework of events. I want to have multiple processes running, waiting for an event to happen. If the event happens, it gets assigned to
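
A minimal sketch of the dispatch pattern described above, with hypothetical names (handle_event stands in for the real work): a fixed pool of worker processes blocks on a shared queue, and whichever worker is free picks up the next event.

    from multiprocessing import Process, Queue

    def handle_event(event):
        print("handled", event)       # stand-in for the CPU-bound handler

    def worker(events):
        while True:
            event = events.get()      # blocks until an event arrives
            if event is None:         # sentinel: time to shut down
                break
            handle_event(event)

    if __name__ == '__main__':
        events = Queue()
        procs = [Process(target=worker, args=(events,)) for _ in range(4)]
        for p in procs:
            p.start()
        for e in range(10):           # events arriving over time
            events.put(e)
        for _ in procs:               # one sentinel per worker
            events.put(None)
        for p in procs:
            p.join()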

Parallelizing four nested loops in Python

夙愿已清 submitted on 2019-12-20 10:25:32
Question: I have a fairly straightforward nested for loop that iterates over four arrays:

    for a in a_grid:
        for b in b_grid:
            for c in c_grid:
                for d in d_grid:
                    do_some_stuff(a, b, c, d)  # perform calculations and write to file

Maybe this isn't the most efficient way to perform calculations over a 4D grid to begin with. I know joblib is capable of parallelizing two nested for loops like this, but I'm having trouble generalizing it to four nested loops. Any ideas?

Answer 1: I usually use code of this form:

    #!/usr
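
Beyond the truncated answer, one common way to generalize is to flatten the four loops into a single stream of (a, b, c, d) tuples with itertools.product and hand that one stream to joblib. A sketch, assuming do_some_stuff is picklable and the grids are ordinary sequences:

    from itertools import product
    from joblib import Parallel, delayed

    def do_some_stuff(a, b, c, d):
        return a + b + c + d          # stand-in for the real calculation

    a_grid = b_grid = c_grid = d_grid = range(10)

    # product() turns the 4 nested loops into one flat iterable of tuples
    results = Parallel(n_jobs=-1)(
        delayed(do_some_stuff)(a, b, c, d)
        for a, b, c, d in product(a_grid, b_grid, c_grid, d_grid)
    )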

Why does this queue behave like this?

安稳与你 submitted on 2019-12-20 06:47:34
Question: I'm writing an object which draws from a multiprocessing queue, and I found that when I run this code, I get data = []. Whereas if I tell the program to sleep for a little bit at the place indicated, I get data = [1, 2], as it should be.

    from multiprocessing import Queue
    from queue import Empty  # needed for the except clause below
    import time

    q = Queue()
    taken = 0
    data = []

    for i in [1, 2]:
        q.put(i)

    # time.sleep(1)  # making this call causes the correct output

    while not q.empty() and taken < 2:
        try:
            data.append(q.get(timeout=1))
            taken += 1
        except Empty:
            continue
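
The likely explanation: multiprocessing.Queue hands items to a background feeder thread, so immediately after put() the queue may still look empty, and the documentation explicitly flags empty() as unreliable. A sketch of the usual fix, which drops the empty() check and lets get(timeout=...) drive the loop:

    from multiprocessing import Queue
    from queue import Empty

    q = Queue()
    for i in [1, 2]:
        q.put(i)

    data = []
    while len(data) < 2:
        try:
            data.append(q.get(timeout=1))  # blocks until an item arrives
        except Empty:
            break                          # nothing arrived within the timeout

    print(data)  # [1, 2]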

AttributeError 'DupFd' in 'multiprocessing.resource_sharer' | Python multiprocessing + threading

試著忘記壹切 submitted on 2019-12-20 05:37:08
Question: I'm trying to communicate between multiple threading.Thread(s) doing I/O-bound tasks and multiple multiprocessing.Process(es) doing CPU-bound tasks. Whenever a thread finds work for a process, the work is put on a multiprocessing.Queue, together with the sending end of a multiprocessing.Pipe(duplex=False). The processes then do their part and send results back to the threads via the Pipe. This procedure seems to work in roughly 70% of cases; in the other 30% I receive an AttributeError:
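
The error comes from multiprocessing.resource_sharer, the machinery that duplicates a Connection's file descriptor when a Pipe end is itself pickled through a Queue; that lazy fd hand-off appears to be the fragile step in this threads-plus-processes setup. A sketch of one workaround (names hypothetical): create the pipes up front and pass the sending end to each Process at construction, so only plain picklable work items ever travel through the queue.

    from multiprocessing import Process, Queue, Pipe

    def worker(work_q, result_conn):
        while True:
            item = work_q.get()
            if item is None:                # sentinel: shut down
                break
            result_conn.send(item * item)   # stand-in for the CPU-bound task
        result_conn.close()

    if __name__ == '__main__':
        work_q = Queue()
        recv_conn, send_conn = Pipe(duplex=False)
        p = Process(target=worker, args=(work_q, send_conn))
        p.start()
        for i in range(5):
            work_q.put(i)
        work_q.put(None)
        for _ in range(5):
            print(recv_conn.recv())
        p.join()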

Where to call join() when multiprocessing

此生再无相见时 submitted on 2019-12-20 04:23:33
Question: When using multiprocessing in Python, I usually see examples where the join() function is called in a separate loop from the one in which each process was actually created. For example, this:

    processes = []
    for i in range(10):
        p = Process(target=my_func)
        processes.append(p)
        p.start()

    for p in processes:
        p.join()

is more common than this:

    processes = []
    for i in range(10):
        p = Process(target=my_func)
        processes.append(p)
        p.start()
        p.join()

But from my understanding of join(), it just tells the script not
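
join() blocks the parent until that child exits, so calling it inside the creation loop means process i+1 is not even started until process i has finished; the work runs serially. A small timing sketch that makes the difference visible (my_func is a hypothetical sleeper):

    import time
    from multiprocessing import Process

    def my_func():
        time.sleep(1)

    if __name__ == '__main__':
        start = time.time()
        procs = []
        for _ in range(4):
            p = Process(target=my_func)
            procs.append(p)
            p.start()
        for p in procs:
            p.join()
        print("separate join loop: %.1fs" % (time.time() - start))  # ~1s

        start = time.time()
        for _ in range(4):
            p = Process(target=my_func)
            p.start()
            p.join()      # serializes: each child must finish before the next starts
        print("join inside loop: %.1fs" % (time.time() - start))    # ~4s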

Python: How to run nested parallel process in python?

元气小坏坏 submitted on 2019-12-20 02:59:17
Question: I have a dataset df of trader transactions, and two levels of for loops, as follows:

    smartTrader = []

    for asset in range(len(Assets)):
        df = df[df['Assets'] == asset]
        # I have some more calculations here
        for trader in range(len(df['TraderID'])):
            # I have some calculations here; if the trader is successful,
            # I add his ID to the list as follows
            smartTrader.append(df['TraderID'][trader])

        # some more calculations here which are related to the first for loop.

I would like to parallelise the
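
A common approach is to parallelize only the outer loop: each worker handles one asset, runs the inner trader loop serially, and returns the trader IDs it judged successful. A sketch with hypothetical stand-ins (a toy df and an is_successful test in place of the real calculations); note it slices df into sub rather than overwriting df itself:

    from multiprocessing import Pool
    import pandas as pd

    df = pd.DataFrame({'Assets':   [0, 0, 1, 1],
                       'TraderID': [10, 11, 12, 13]})
    Assets = [0, 1]

    def is_successful(sub, trader_id):
        return True                      # stand-in for the real per-trader test

    def process_asset(asset):
        sub = df[df['Assets'] == asset]  # per-asset slice, not df itself
        return [t for t in sub['TraderID'] if is_successful(sub, t)]

    if __name__ == '__main__':
        with Pool() as pool:
            per_asset = pool.map(process_asset, Assets)
        smartTrader = [t for ids in per_asset for t in ids]
        print(smartTrader)               # [10, 11, 12, 13]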

Multiprocessing works in Ubuntu, doesn't in Windows

∥☆過路亽.° submitted on 2019-12-19 09:08:00
Question: I am trying to use this example as a template for a queuing system in my CherryPy app. I was able to convert it from Python 2 to Python 3 (changing "from Queue import Empty" into "from queue import Empty") and to execute it on Ubuntu. But when I execute it on Windows, I get the following error:

    F:\workspace\test>python test.py
    Traceback (most recent call last):
      File "test.py", line 112, in <module>
        broker.start()
      File "C:\Anaconda3\lib\multiprocessing\process.py", line 105, in start
        self._popen =
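
The underlying portability issue: Linux defaults to the "fork" start method, while Windows only has "spawn", where the child re-imports the main module. Any code that starts processes must therefore sit behind an if __name__ == '__main__' guard, and everything handed to the child must be picklable. A minimal Windows-safe skeleton (broker_loop is a hypothetical stand-in for the example's broker):

    from multiprocessing import Process, Queue

    def broker_loop(q):          # must be a top-level function (picklable)
        while True:
            item = q.get()
            if item is None:
                break
            print("handling", item)

    if __name__ == '__main__':   # required on Windows: spawn re-imports this module
        q = Queue()
        broker = Process(target=broker_loop, args=(q,))
        broker.start()
        q.put("job")
        q.put(None)
        broker.join()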

Why is my Python app stalled with 'system' / kernel CPU time

南楼画角 submitted on 2019-12-19 07:21:28
Question: First off, I wasn't sure whether I should post this as an Ubuntu question or here, but I'm guessing it's more of a Python question than an OS one. My Python application is running on top of Ubuntu on a 64-core AMD server. It pulls images from 5 GigE cameras over the network by calling out to a .so through ctypes and then processes them. I am seeing frequent pauses in my application, causing frames from the cameras to be dropped by the external camera library. To debug this I've used the popular
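
Not the (truncated) tool the asker mentions, but one way to confirm where the time goes is to sample user versus system CPU time per process with the third-party psutil package: steadily climbing system time during a stall points at kernel work (syscalls, page faults) rather than Python bytecode. A small sampling sketch:

    import time
    import psutil

    proc = psutil.Process()          # the current process
    for _ in range(10):              # ten one-second samples
        t = proc.cpu_times()
        print("user=%.2fs system=%.2fs" % (t.user, t.system))
        time.sleep(1)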