There are two sides to my app. On one side I use C++ to read frames from a camera with Pleora's eBUS SDK. When the stream is first received, before I convert the buffer to an image, I can read the stream 16 bits at a time and perform some calculations for each pixel; that is, each pixel is represented by a 16-bit chunk of data.
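To illustrate what I mean by a 16-bit chunk per pixel, here is a rough Python equivalent of the read I do on the C++ side (the buffer contents and the little-endian, unsigned-16-bit layout are made up for the sketch):

```python
import struct

# Hypothetical raw frame row: 4 pixels, one unsigned 16-bit little-endian
# value per pixel. In the real app this buffer comes from the eBUS SDK.
raw = struct.pack('<4H', 1000, 20000, 65535, 0)

width = 4
# Pixel i occupies bytes [2*i, 2*i + 2), so the whole row unpacks at once.
pixels = struct.unpack('<%dH' % width, raw)
print(pixels)  # one 16-bit integer per pixel: (1000, 20000, 65535, 0)
```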
The second half is a Django web app where the same video output is presented, this time via an ffmpeg/nginx HLS stream. When the user clicks on the video, I want to take the current frame and the coordinates of the click and perform the same calculation as in the C++ portion above.
Right now I use an HTML5 canvas to capture the frame and call canvas.toDataURL() to convert it into a base64-encoded image. I then pass the base64 image, the click coordinates, and the frame dimensions to Python via AJAX.
In Python I am trying to manipulate this base64-encoded image so that I get 16 bits per pixel. At the moment I do the following:
    pos = json.loads(request.GET['pos'])
    str_frame = json.loads(request.GET['frame'])
    dimensions = json.loads(request.GET['dimensions'])
    pixel_index = (dimensions['width'] * pos['y']) + pos['x'] + 1
    b64decoded_frame = base64.b64decode(str_frame.encode('utf-8'))
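One thing I'm wondering: if the base64 payload contains an encoded (compressed) image rather than raw pixels, that alone would explain the small size. A stdlib-only sketch of that effect, using zlib as a stand-in for the PNG encoder (the frame size and contents are made up):

```python
import base64
import zlib

# Made-up 100x100 'frame', 3 bytes per pixel, as raw pixel data would look.
raw_pixels = bytes([42, 42, 42]) * (100 * 100)

# What canvas.toDataURL() actually base64-encodes is an *encoded* image
# (PNG uses zlib-style compression internally), not the raw pixel buffer.
encoded = zlib.compress(raw_pixels)
b64 = base64.b64encode(encoded)

decoded = base64.b64decode(b64)
# base64-decoding undoes only the base64 layer; the result is still
# compressed, hence far fewer bytes than one entry per pixel channel.
print(len(decoded) < len(raw_pixels))  # True
```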
However, there are far fewer indexes in b64decoded_frame than there are pixels in the image, and the integer values aren't nearly as high as expected. I have checked that the image is intact, since I am able to save it as a PNG.
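For reference, saving it works via PIL. Assuming Pillow/PIL is available, this is a sketch of decoding the base64 PNG all the way back to per-pixel values (the image here is built in-process as a stand-in for request.GET['frame'], with any 'data:image/png;base64,' prefix already stripped):

```python
import base64
import io
from PIL import Image

# Stand-in for the base64 frame sent from the browser: an 8x8 PNG built
# in-process with a uniform (10, 20, 30) fill color.
src = Image.new('RGB', (8, 8), (10, 20, 30))
buf = io.BytesIO()
src.save(buf, format='PNG')
b64_frame = base64.b64encode(buf.getvalue())

# Decode the PNG itself, not just the base64 layer, to get pixel values.
img = Image.open(io.BytesIO(base64.b64decode(b64_frame)))
x, y = 3, 5  # hypothetical click coordinates
print(img.getpixel((x, y)))  # -> (10, 20, 30), 8 bits per channel
```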
To summarize: how do I convert a base64 image into a serialized binary stream where each pixel is represented by 16 bits?
UPDATE
I forgot to mention that I'm using Python 3.2.
After some more research, I think what I'm trying to do is get the Mono16 value of a given pixel. I don't know for sure that this is what I want, but if anyone could explain how to convert either an image or a single pixel to Mono16, I could explore whether that is in fact the solution.
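My current (possibly wrong) understanding is that Mono16 just means a single 16-bit intensity per pixel. Here is a sketch of one hypothetical mapping from an 8-bit RGB pixel: BT.601 luma rescaled from 0-255 up to 0-65535. This may or may not match what the SDK calls Mono16:

```python
def rgb_to_mono16(r, g, b):
    """Hypothetical mono16: BT.601 luma of an 8-bit RGB pixel, rescaled
    to the full 16-bit range (multiplying by 257 maps 255 -> 65535)."""
    luma = round(0.299 * r + 0.587 * g + 0.114 * b)
    return luma * 257

print(rgb_to_mono16(255, 255, 255))  # 65535 (white -> full scale)
print(rgb_to_mono16(0, 0, 0))        # 0 (black)
```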