Save frame from TangoService_connectOnFrameAvailable

Submitted by 大憨熊 on 2019-12-31 05:52:30

Question


How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this:

static void onColorFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer) 
{
  ...
  std::ofstream fp;
  fp.open(imagefile, std::ios::out | std::ios::binary );
  int offset = 0;
  for(int i = 0; i < buffer->height*2 + 1; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width);
    offset += buffer->stride;
  }
  fp.close();
}

Then, to get rid of the metadata in the first row and to display the image, I run:

$ dd if="input.raw" of="new.raw" bs=1 skip=1280
$ vooya new.raw

I was careful to make sure in vooya that the channel order is yvu, but the resulting output (screenshot omitted) does not look right.

What am I doing wrong in saving the image and displaying it?

UPDATE per Mark Mullin's response:

int offset = buffer->stride; // header offset
// copy Y channel
for(int i = 0; i < buffer->height; i++) {
  fp.write((char*)(buffer->data + offset), buffer->width);
  offset += buffer->stride;
}
// copy V channel
for(int i = 0; i < buffer->height / 2; i++) {
  fp.write((char*)(buffer->data + offset), buffer->width / 2);
  offset += buffer->stride / 2;
}
// copy U channel
for(int i = 0; i < buffer->height / 2; i++) {
  fp.write((char*)(buffer->data + offset), buffer->width / 2);
  offset += buffer->stride / 2;
}

This now shows the picture below, but there are still some artifacts; I wonder whether that's from the Tango tablet camera or from my processing of the raw data... any thoughts?
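For reference, the corrected loops above assume the YV12 plane layout sketched below. This is only a rough sketch: it takes the chroma row stride to be stride / 2 and treats the first stride bytes as a metadata row, as the updated code does; the Yv12Offsets helper is purely illustrative, and TangoImageBuffer comes from tango_client_api.h.

#include <cstddef>

// Plane offsets into the Tango YV12 buffer, assuming one leading metadata row
// of `stride` bytes and a chroma row stride of stride / 2.
struct Yv12Offsets { size_t y, v, u, payload; };

static Yv12Offsets yv12Offsets(const TangoImageBuffer* buffer)
{
  const size_t header = buffer->stride;                                       // metadata row
  const size_t ySize  = (size_t)buffer->height * buffer->stride;              // full-res Y plane
  const size_t cSize  = (size_t)(buffer->height / 2) * (buffer->stride / 2);  // one chroma plane
  Yv12Offsets o;
  o.y = header;           // Y starts right after the header row
  o.v = o.y + ySize;      // YV12 stores V before U
  o.u = o.v + cSize;
  o.payload = ySize + 2 * cSize;  // = width * height * 1.5 when stride == width
  return o;
}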


Answer 1:


Can't say exactly what you're doing wrong, and Tango images often have artifacts in them. Yours are new to me, but I often see baby blue where glare seems to be bothering the deeper systems, and as the camera begins to lose sync with the depth system under load, you'll often see what looks like a shiny grid (it's the IR pattern, I think). In the end, every rational attempt to handle the image with OpenCV etc. failed, so I hand-wrote the decoder with some help from another SO thread.

That said, given that the image buffer contains a pointer to the raw data from Tango, and that variables like height and stride are filled in from the data received in the callback, this logic will create an RGBA map. Yeah, I optimized the math in it, so it's a little ugly; its slower but functionally equivalent twin is listed second. My own experience says it's a horrible idea to try to do this decode right in the callback (I believe Tango is capable of losing sync with the depth flash for purely spiteful reasons), so mine runs at the render stage.

Fast

uchar* pData = TangoData::cameraImageBuffer;
uchar* iData = TangoData::cameraImageBufferRGBA;
int size = (int)(TangoData::imageBufferStride * TangoData::imageBufferHeight);
float invByte = 0.0039215686274509803921568627451;  // ( 1 / 255)
// Umax and Vmax are the YUV chroma maxima, 0.436 and 0.615 (assumed to be defined elsewhere).

int halfi, uvOffset, halfj, uvOffsetHalfj;
float y_scaled, v_scaled, u_scaled;
int uOffset = size / 4 + size;
int halfstride = TangoData::imageBufferStride / 2;
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    halfi = i / 2;
    uvOffset = halfi * halfstride;
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        halfj = j / 2;
        uvOffsetHalfj = uvOffset + halfj;
        y_scaled = pData[i * TangoData::imageBufferStride + j] * invByte;
        v_scaled = 2 * (pData[uvOffsetHalfj + size] * invByte - 0.5f) * Vmax;
        u_scaled = 2 * (pData[uvOffsetHalfj + uOffset] * invByte - 0.5f) * Umax;
        *iData++ = (uchar)((y_scaled + 1.13983f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled - 0.39465f * u_scaled - 0.58060f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled + 2.03211f * u_scaled) * 255.0);
        *iData++ = 255;
    }
}

Understandable

for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        uchar y = pData[i * TangoData::imageBufferStride + j];
        uchar v = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size];
        uchar u = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size + (size / 4)];
        YUV2RGB(y, u, v);
        *iData++ = y;
        *iData++ = u;
        *iData++ = v;
        *iData++ = 255;
    }
}
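The YUV2RGB helper that the readable version calls isn't shown in the answer. As a guess at what it does, here is a minimal in-place sketch reusing the coefficients from the fast loop; the arguments are overwritten with R, G and B, and clamping is added.

// Not from the original answer: a sketch of the YUV2RGB helper assumed above.
// Converts one pixel in place (y, u, v become R, G, B), using the same
// BT.601-style coefficients and the Umax/Vmax constants as the fast loop.
static void YUV2RGB(uchar& y, uchar& u, uchar& v)
{
    const float yf = y / 255.0f;
    const float uf = 2.0f * (u / 255.0f - 0.5f) * 0.436f;   // Umax
    const float vf = 2.0f * (v / 255.0f - 0.5f) * 0.615f;   // Vmax
    float r = yf + 1.13983f * vf;
    float g = yf - 0.39465f * uf - 0.58060f * vf;
    float b = yf + 2.03211f * uf;
    // clamp to [0, 1] before converting back to bytes
    r = r < 0.0f ? 0.0f : (r > 1.0f ? 1.0f : r);
    g = g < 0.0f ? 0.0f : (g > 1.0f ? 1.0f : g);
    b = b < 0.0f ? 0.0f : (b > 1.0f ? 1.0f : b);
    y = (uchar)(r * 255.0f);
    u = (uchar)(g * 255.0f);
    v = (uchar)(b * 255.0f);
}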



Answer 2:


I think there is a better way to do it if you can do it offline. The best way to save the image should be something like this (don't forget to create the Pictures folder, or you won't save anything):

void onFrameAvailableRouter(void* context, TangoCameraId id, const TangoImageBuffer* buffer) {
  // Write the raw image to a .txt file named after the frame timestamp.
  std::stringstream name_stream;
  name_stream.setf(std::ios_base::fixed, std::ios_base::floatfield);
  name_stream.precision(3);
  name_stream << "/storage/emulated/0/Pictures/"
              << buffer->timestamp
              << ".txt";

  std::fstream f(name_stream.str().c_str(), std::ios::out | std::ios::binary);
  // size = 1280*720*1.5 to save YUV, or 1280*720 to save grayscale
  int size = buffer->stride * buffer->height * 3 / 2;
  f.write((const char *) buffer->data, size * sizeof(uint8_t));
  f.close();
}
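For context, a callback like this is registered with the color camera roughly as follows; a minimal sketch, assuming the Tango service is already set up (connectColorCallback is just an illustrative wrapper).

#include <tango_client_api.h>

// Hook the router up to the color camera. The context pointer is handed back
// to the callback unchanged; nullptr is fine if it isn't needed.
bool connectColorCallback()
{
  TangoErrorType err = TangoService_connectOnFrameAvailable(
      TANGO_CAMERA_COLOR, /*context=*/nullptr, onFrameAvailableRouter);
  return err == TANGO_SUCCESS;
}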

Then, to convert the .txt files to PNG, you can run this Python script:

from os import listdir, makedirs
from os.path import isdir

import cv2
import numpy as np

inputFolder = "input"
outputFolderRGB = "output/rgb"
outputFolderGray = "output/gray"

# Tango color camera geometry and the number of output channels (BGRA).
width = 1280
height = 720
channels = 4

if isdir(inputFolder):
    if not isdir(outputFolderRGB):
        makedirs(outputFolderRGB)
    if not isdir(outputFolderGray):
        makedirs(outputFolderGray)

    # The output directories are ready
    allFile = listdir(inputFolder)
    numberOfFile = len(allFile)
    count = 0
    for file in allFile:
        count += 1
        print("current file:", count, "/", numberOfFile)
        input_filename = file
        output_filename = input_filename[0:(len(input_filename) - 3)] + "png"

        # load file into buffer
        data = np.fromfile(inputFolder + "/" + input_filename, dtype=np.uint8)

        # To get the RGB image:
        # create the yuv image (height * 1.5 rows of width bytes)
        yuv = np.ndarray((height + height // 2, width), dtype=np.uint8, buffer=data)
        # create a height x width x channels matrix with dtype uint8 for the rgb image
        img = np.zeros((height, width, channels), dtype=np.uint8)
        # convert the yuv image to an rgb image
        cv2.cvtColor(yuv, cv2.COLOR_YUV2BGRA_NV21, img, channels)
        cv2.imwrite(outputFolderRGB + "/" + output_filename, img)

        # If you saved the image in grayscale, use this part instead:
        # yuvReal = np.ndarray((height, width), dtype=np.uint8, buffer=data)
        # cv2.imwrite(outputFolderGray + "/" + output_filename, yuvReal)
else:
    print("no input folder found")

You just have to put your .txt files in a folder named input. It's a Python script, but if you prefer a C++ version it's very close (a rough sketch follows).
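For those who do prefer C++, a rough equivalent of the conversion step with OpenCV might look like this; it mirrors the Python script's assumptions (1280x720 frames, stride equal to width, NV21-style color conversion), and the file paths are placeholders.

#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

#include <opencv2/opencv.hpp>

// Read one raw frame dumped by the callback above and write it out as a PNG.
int main(int argc, char** argv)
{
    const int width = 1280, height = 720;
    const char* inPath  = (argc > 1) ? argv[1] : "input/timestamp.txt";
    const char* outPath = (argc > 2) ? argv[2] : "rgb.png";

    // Load width * height * 1.5 bytes of raw YUV data.
    std::ifstream in(inPath, std::ios::binary);
    std::vector<uint8_t> data((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());
    if (data.size() < (size_t)(width * height * 3 / 2)) return 1;

    // Wrap the buffer as a (height * 1.5) x width single-channel Mat and convert.
    cv::Mat yuv(height + height / 2, width, CV_8UC1, data.data());
    cv::Mat bgra;
    cv::cvtColor(yuv, bgra, cv::COLOR_YUV2BGRA_NV21);
    cv::imwrite(outPath, bgra);
    return 0;
}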



Source: https://stackoverflow.com/questions/28157148/save-frame-from-tangoservice-connectonframeavailable
