Restoring a numpy array representing an image after base64 encoding

Submitted by 空扰寡人 on 2020-01-13 18:01:05

Question


Base64-encoding an image file makes it possible to fully restore the image to its original shape, with all of its dimensions (2D, RGB), without knowing the resolution explicitly; that information is carried along inside the encoded data.

However, when I have a numpy array representing an image like:

import base64, numpy as np

test_image = np.random.rand(10, 10, 3)

and then put it into base64 with:

b64_test_image = base64.b64encode(test_image)

I am able to get back to the content of the array with:

decoded = base64.b64decode(b64_test_image)
test_image_1D = np.frombuffer(decoded)

However, test_image_1D is only one-dimensional, whereas the original image had the shape 10x10x3. Is it possible to restore the original array without knowing its shape and dtype in advance, as is the case with image files?
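For context, a manual round-trip does work, but only because the shape and dtype are hard-coded (a minimal sketch of exactly what I would like to avoid):

# Works only because the shape (10, 10, 3) and the dtype (float64, which is
# also np.frombuffer's default) are known in advance.
test_image_restored = np.frombuffer(decoded, dtype=np.float64).reshape(10, 10, 3)
assert np.array_equal(test_image_restored, test_image)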


Answer 1:


Assuming your data is always an image, you need to go through an image format to get the image back from the base64-encoded string: it is the image file's header, not base64 itself, that stores the dimensions. For example with OpenCV (note that OpenCV's JPEG codec expects 8-bit data, so a float array like the one above should first be scaled and converted to uint8):

import base64
import cv2
import numpy as np

retval, buffer = cv2.imencode('.jpg', test_image)  # the JPEG header stores the image dimensions
jpg_as_text = base64.b64encode(buffer)

nparr = np.frombuffer(base64.b64decode(jpg_as_text), np.uint8)  # np.fromstring is deprecated
img2 = cv2.imdecode(nparr, cv2.IMREAD_COLOR)  # shape is recovered from the decoded JPEG
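As a quick sanity check of the round-trip (a minimal sketch building on the snippet above; since JPEG compression is lossy, a lossless container such as '.png' can be used when the exact pixel values must survive):

img_uint8 = (test_image * 255).astype(np.uint8)   # OpenCV codecs expect 8-bit data
retval, buffer = cv2.imencode('.png', img_uint8)  # PNG is lossless
b64 = base64.b64encode(buffer)
restored = cv2.imdecode(np.frombuffer(base64.b64decode(b64), np.uint8), cv2.IMREAD_COLOR)

print(restored.shape)                     # (10, 10, 3), recovered from the PNG header
print(np.array_equal(restored, img_uint8))  # True, because PNG preserves pixel values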


Source: https://stackoverflow.com/questions/49735187/restoring-a-numpy-array-representing-an-image-after-base64-encoding
