Segmentation fault with Array in multiprocessing.sharedctypes

Submitted by 馋奶兔 on 2019-12-11 03:49:53

Question


I allocate a multiprocessing.sharedctypes.Array so that it can be shared among processes. Wrapped around this is a generator that yields a result computed from that array. Even before the array is used for any parallel computation, I hit a segmentation fault after 2 iterations, which I suspect is caused by the deallocation mechanism between C and Python. I can reproduce the error with the following minimal snippet:

import numpy as np
from multiprocessing import sharedctypes

def generator(niter):
    while niter > 0:
        # Build a ctypes view of a fresh random array, copy it into a
        # shared-memory Array, then read the result back as numpy.
        tmp = np.ctypeslib.as_ctypes(np.random.randn(300, 100000))
        shared_arr = sharedctypes.Array(tmp._type_, tmp, lock=False)
        table = np.ctypeslib.as_array(shared_arr)
        yield np.sum(table)
        niter -= 1

for s in generator(10):
    print(s)

If I reduce the array's size, e.g. to 3x10 instead of 300x100000, the segmentation fault does not occur. It is not a lack of memory: my PC has 16 GB and only about 6 GB was in use at the time, and a 300x100000 float64 array takes only about 240 MB (300 × 100000 × 8 bytes).
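For what it's worth, a commonly suggested way to sidestep the sharedctypes.Array(tmp._type_, tmp, ...) construction from a large temporary is to allocate the shared block once with sharedctypes.RawArray (sized by element count) and wrap it in a numpy view. Below is a minimal sketch of that pattern, not taken from the original question, assuming the goal is just a reusable float64 buffer of the same 300x100000 shape:

import numpy as np
from multiprocessing import sharedctypes

def generator(niter):
    # Allocate the shared buffer once, by element count, rather than
    # building a huge ctypes object from a temporary numpy array.
    shared_arr = sharedctypes.RawArray('d', 300 * 100000)  # 'd' = C double
    table = np.frombuffer(shared_arr, dtype=np.float64).reshape(300, 100000)
    while niter > 0:
        table[:] = np.random.randn(300, 100000)  # fill shared memory in place
        yield np.sum(table)
        niter -= 1

for s in generator(10):
    print(s)

Since the buffer is created once and refilled in place, no large ctypes object is constructed on each iteration, and the shared memory can still be handed to worker processes later.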

Source: https://stackoverflow.com/questions/40708439/segmentation-fault-with-array-in-multiprocessing-sharedctypes
