Right way to share opencv video frame as Numpy array between multiprocessing processes

Submitted by 早过忘川 on 2019-12-11 17:33:56

Question


I want to share the frame captured with OpenCV with my multiprocessing subprocess, but video_capture.read() creates a new object and doesn't write into the NumPy array that I am sharing by wrapping it with multiprocessing.Array().

Here is the code:

ret, frame = video_capture.read()
shared_array = mp.Array(ctypes.c_uint16, frame.shape[0] * frame.shape[1], lock=False)
while True:
    b = np.frombuffer(shared_array)
    ret, b = video_capture.read()

But the buffer b gets overridden by the read() function, so I never actually write into my buffer/shared array.

In the subprocess I do the following:

img = np.frombuffer(shared_array)
cv2.imshow('Video', img)

The shared array only contains the first frame of the video. How can I correctly write into the shared array so that I can read every frame in the subprocess? I don't want to use queues or pipes; shared memory is the way to go because more processes will consume the frames.


Answer 1:


There are a couple of issues: one is the size and shape of the shared array, and another is how you access it. To solve the first problem, make sure the size of the created array matches the size of a video frame. You read a frame and used its width and height (although you could get these without reading any frame), but did not account for its number of channels.

ret, frame = video_capture.read()
shape = frame.shape
shared_array = mp.Array(ctypes.c_uint16, shape[0] * shape[1] * shape[2], lock=False)

You chose uint16 as the data type, which is fine (again, you can use video_capture.get(cv2.CAP_PROP_FORMAT) to get the exact data type of the frames, but you can choose whatever you want, since NumPy will convert the values to the array's data type). However, when you create the NumPy array you must specify that this is the data type you want to use, otherwise it will use float64 by default:

b = np.frombuffer(shared_array, dtype=np.uint16)
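A quick sketch of this pitfall (the four-element buffer size is purely illustrative): the same 8 bytes of shared memory are reinterpreted very differently depending on the dtype you pass.

```python
import ctypes
import multiprocessing as mp

import numpy as np

# Four uint16 values = 8 bytes of shared memory (illustrative size only).
shared_array = mp.Array(ctypes.c_uint16, 4, lock=False)

a = np.frombuffer(shared_array)                   # dtype defaults to float64
b = np.frombuffer(shared_array, dtype=np.uint16)  # matches the ctypes type

# 8 bytes reinterpreted as float64 yields a single element;
# as uint16 it yields the four elements that were allocated.
print(a.dtype, a.size)  # float64 1
print(b.dtype, b.size)  # uint16 4
```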

And then you have to reshape it to have the shape of a frame (note that this does not create a new NumPy array, it's just a view, so it still uses the shared buffer):

b = b.reshape(shape)
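To see that the reshape really is only a view over the same memory, here is a minimal sketch (the 8-byte buffer and the value 123 are illustrative):

```python
import numpy as np

buf = bytearray(8)                       # 8 bytes of backing storage
b = np.frombuffer(buf, dtype=np.uint16)  # four uint16 elements over buf
v = b.reshape(2, 2)                      # a view, not a copy

v[0, 0] = 123                            # writing through the view...
print(b[0])                              # ...is visible in the flat array: 123
print(v.base is b)                       # True: v shares b's buffer
```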

Finally, when you read the frame, you don't want to overwrite the b variable, but rather write to the array. Also, you should only do that when a frame was actually read:

while True:
    ret, frame = video_capture.read()
    if ret:
        b[:] = frame


Source: https://stackoverflow.com/questions/49191615/right-way-to-share-opencv-video-frame-as-numpy-array-between-multiprocessing-pro
