Making my NumPy array shared across processes

Submitted anonymously (unverified) on 2019-12-03 01:48:02

Question:

I have read quite a few of the questions on SO about sharing arrays, and it seems simple enough for simple arrays, but I am stuck trying to get it working for the array I have.

```python
import numpy as np

data = np.zeros(250, dtype='float32, (250000,2)float32')
```

I have tried converting this to a shared array by trying to somehow make mp.Array accept the data; I have also tried creating the array using ctypes, as such:

```python
import multiprocessing as mp

data = mp.Array('c_float, (250000)c_float', 250)
```

The only way I have managed to get my code working is by not passing the data to the function, but by passing an encoded string to be uncompressed/decoded; however, this would end up with n (number of strings) processes being called, which seems redundant. My desired implementation is based on slicing the list of binary strings into x (number of processes) chunks and passing each chunk, the data, and an index to the processes. This works, except that the data is modified only locally; hence the question of how to make it shared. Any working example with a custom (nested) NumPy array would already be a great help.

PS: This question is a follow-up to Python multi-processing.

Answer 1:

Note that you can start out with an array of compound (structured) dtype:

```python
In [4]: data = np.zeros(250, dtype='float32, (250000,2)float32')
```

and view it as an array of homogeneous dtype:

```python
In [5]: data2 = data.view('float32')
```

and later, convert it back to the compound dtype:

```python
In [7]: data3 = data2.view('float32, (250000,2)float32')
```

Changing the dtype this way is a very quick operation: it does not touch the underlying data, only the way NumPy interprets it, so it is virtually costless.

So what you've read about arrays with simple (homogeneous) dtypes can be readily applied to your compound dtype with the trick above.
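A minimal sketch of that round trip, using the dtype from the question, showing that the flat view and the structured array share the same memory (the write through the flat view is visible through the structured one):

```python
import numpy as np

dt = 'float32, (250000,2)float32'     # the structured dtype from the question
data = np.zeros(250, dtype=dt)

flat = data.view('float32').ravel()   # homogeneous 1-D view, no copy
flat[0] = 1.5                         # writes straight into data's buffer

restored = flat.view(dt)              # reinterpret as the structured dtype again
print(restored.shape)                 # (250,)
print(data['f0'][0])                  # 1.5 -- same memory throughout
```

Since `data` is C-contiguous, `ravel()` returns a view rather than a copy, so no data is duplicated at any step.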


The code below borrows many ideas from J.F. Sebastian's answer, here.

```python
import base64
import contextlib
import ctypes
import multiprocessing as mp
import struct

import numpy as np

# tonumpyarray() and the global shared_arr come from J.F. Sebastian's answer
# referenced above: shared_arr is an mp.Array set up by the pool initializer,
# and tonumpyarray(a) is essentially np.frombuffer(a.get_obj(), dtype=np.float32).


def decode(arg):
    chunk, counter = arg
    print(len(chunk), counter)
    for x in chunk:
        peak_counter = 0
        data_buff = base64.b64decode(x)
        buff_size = len(data_buff) // 4
        unpack_format = ">%dL" % buff_size
        index = 0
        for y in struct.unpack(unpack_format, data_buff):
            buff1 = struct.pack("I", y)
            buff2 = struct.unpack("f", buff1)[0]
            with shared_arr.get_lock():
                # '<f4' is little-endian float32. The original post was cut off
                # mid-dtype here (the '<' was swallowed by the HTML renderer);
                # the dtype is reconstructed from the question's declaration.
                data = tonumpyarray(shared_arr).view(
                    [('f0', '<f4'), ('f1', '<f4', (250000, 2))])
                # ... (the remainder of the loop body was lost to the same
                # truncation; the assignment it performs is quoted below)
```

If you can guarantee that the various processes which execute the assignments

```python
if index % 2 == 0:
    data[counter][1][peak_counter][0] = float(buff2)
else:
    data[counter][1][peak_counter][1] = float(buff2)
```

never compete to alter the data in the same locations, then I believe you can actually forgo using the lock

```python
with shared_arr.get_lock():
```

but I don't grok your code well enough to know for sure, so to be on the safe side, I included the lock.



Answer 2:

```python
import ctypes
import time
from multiprocessing import Array, Process

import numpy as np


def fun(a):
    a[0] = -a[0]
    while 1:
        time.sleep(2)
        # Re-wrap the shared buffer as a NumPy array on the child side.
        c = np.frombuffer(a.get_obj(), dtype=np.float32)
        c.shape = 3, 3
        print('haha', c)


def main():
    a = np.random.rand(3, 3).astype(np.float32)
    a.shape = 1 * a.size          # flatten so Array() can consume it
    h = Array(ctypes.c_float, a)  # shared copy of a's contents
    print("Originally,", h)

    # Create and start the child process (it loops and prints forever).
    p = Process(target=fun, args=(h,))
    p.start()
    a.shape = 3, 3
    # Print out the changed values
    print('first', a)
    time.sleep(3)
    print('main', np.frombuffer(h.get_obj(), dtype=np.float32))


if __name__ == "__main__":
    main()
```

