Why are multiprocessing.sharedctypes assignments so slow?

Asked by 你的背包 on 2020-12-16 03:36 · 3 answers · 1998 views

Here's a little benchmarking code to illustrate my question:

import numpy as np
import multiprocessing as mp
# allocate memory
%time temp = mp.RawArray(np.

3 Answers
  •  北荒 (OP)
     2020-12-16 04:06

    Just put a numpy array around the shared array:

    import numpy as np
    import multiprocessing as mp
    
    sh = mp.RawArray('i', int(1e8))
    x = np.arange(1e8, dtype=np.int32)
    sh_np = np.ctypeslib.as_array(sh)
    

    then time:

    %time sh[:] = x
    CPU times: user 10.1 s, sys: 132 ms, total: 10.3 s
    Wall time: 10.2 s
    
    %time memoryview(sh).cast('B').cast('i')[:] = x
    CPU times: user 64 ms, sys: 132 ms, total: 196 ms
    Wall time: 196 ms
    
    %time sh_np[:] = x
    CPU times: user 92 ms, sys: 104 ms, total: 196 ms
    Wall time: 196 ms
    

    There's no need to figure out how to cast the memoryview (as I had to on Python 3 under Ubuntu 16.04), and no need to mess with reshaping (which matters when x has more dimensions, since cast() flattens the view). You can also use sh_np.dtype.name to double-check the data type, just like with any numpy array. :)
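    As a quick sanity check (a minimal sketch; the small shape and values here are my own illustration, not from the answer above), the same wrapping works for multi-dimensional data: reshape the wrapped view, and writes through the numpy view are visible through the original RawArray handle, confirming that no copy is made:

    import numpy as np
    import multiprocessing as mp

    # Allocate a flat shared buffer large enough for a 2x3 array (shape is illustrative).
    sh = mp.RawArray('i', 6)

    # Wrap it in a numpy view and reshape; both as_array and reshape return views, not copies.
    sh_np = np.ctypeslib.as_array(sh).reshape(2, 3)

    # Fill through the numpy view...
    sh_np[:] = np.arange(6, dtype=np.int32).reshape(2, 3)

    # ...and the underlying RawArray sees the same data, since the memory is shared.
    print(list(sh))          # [0, 1, 2, 3, 4, 5]
    print(sh_np.dtype.name)  # int32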
