Question
I allocate a multiprocessing.sharedctypes.Array so that it can be shared among processes. Wrapped around this is a generator that yields results computed from that array. Before the array is even used for any parallel computation, I hit a segmentation fault after 2 iterations, which I suspect is caused by the deallocation mechanism between C and Python. I can reproduce the error with the following simple code snippet:
import numpy as np
from multiprocessing import Pool, sharedctypes

def generator(niter):
    while niter > 0:
        # Random 300x100000 float64 array, viewed as a ctypes array
        tmp = np.ctypeslib.as_ctypes(np.random.randn(300, 100000))
        # Copy it into a shared-memory Array (no lock needed here)
        shared_arr = sharedctypes.Array(tmp._type_, tmp, lock=False)
        # Wrap the shared buffer as a NumPy array again
        table = np.ctypeslib.as_array(shared_arr)
        yield np.sum(table)
        niter -= 1

for s in generator(10):
    print(s)
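For context (Pool is imported above but not yet used), the eventual goal is to hand the shared array to worker processes. A minimal sketch of that intended use, assuming a fork start method (Linux/macOS) and a hypothetical worker column_sum, might look like this:

import ctypes
import numpy as np
from multiprocessing import Pool, sharedctypes

# One flat shared buffer holding a 300x100000 float64 table
shared_arr = sharedctypes.Array(ctypes.c_double, 300 * 100000, lock=False)

def column_sum(i):
    # Workers inherit shared_arr on fork; re-wrap it as a NumPy view
    table = np.ctypeslib.as_array(shared_arr).reshape(300, 100000)
    return table[:, i].sum()

if __name__ == '__main__':
    with Pool(4) as pool:
        print(sum(pool.map(column_sum, range(100))))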
If I reduce the array's size, e.g. 3x10 instead of 300x100000, the segmentation fault does not occur. It is not a memory shortage: my PC has 16 GB of RAM and only 6 GB is in use at the time, and a 300x100000 array takes only a few hundred MB.
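A quick back-of-the-envelope check of that size (assuming float64 elements, which np.random.randn produces):

>>> 300 * 100000 * 8        # elements x 8 bytes per float64
240000000
>>> 300 * 100000 * 8 / 1e6  # roughly 240 MB
240.0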
Source: https://stackoverflow.com/questions/40708439/segmentation-fault-with-array-in-multiprocessing-sharedctypes