How to separate the Y plane, U plane, and V plane from bi-planar YUV in iOS?

Submitted by ☆樱花仙子☆ on 2019-12-01 00:11:50

The Y plane represents the luminance component, and the UV plane represents the Cb and Cr chroma components.

In the case of the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format, the luma plane is 8 bpp with the same dimensions as your video, while the chroma plane is 16 bpp but only a quarter the size of the original video (half the width and half the height). Each pixel of this plane packs one Cb and one Cr component.

So if your input video is 352x288, your Y plane will be 352x288 at 8 bpp, and your CbCr plane 176x144 at 16 bpp. That works out to the same amount of data as a 12 bpp 352x288 image: half of what RGB888 would require, and still less than RGB565.

So in the buffer, the Y plane looks like [YYYYY...] and the UV plane like [UVUVUVUVUV...],

versus RGB being, of course, [RGBRGBRGB...].
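
If you want to confirm that layout at runtime, Core Video will report each plane's geometry for you. A minimal sketch, assuming pixelBuffer is a CVPixelBufferRef you already hold in this format:

#include <CoreVideo/CoreVideo.h>
#include <stdio.h>

// For a 4:2:0 bi-planar buffer, expect plane 0 (Y) at full resolution
// and plane 1 (CbCr) at half the width and half the height.
size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer); // 2 here
for (size_t i = 0; i < planeCount; i++) {
    printf("plane %zu: %zux%zu, %zu bytes per row\n", i,
           CVPixelBufferGetWidthOfPlane(pixelBuffer, i),
           CVPixelBufferGetHeightOfPlane(pixelBuffer, i),
           CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, i));
}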

jie tang

The code below copies the YUV data out of a pixelBuffer whose format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.

#include <CoreVideo/CoreVideo.h>
#include <stdlib.h>
#include <string.h>

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

size_t pixelWidth  = CVPixelBufferGetWidth(pixelBuffer);
size_t pixelHeight = CVPixelBufferGetHeight(pixelBuffer);
// Y plane byte size: one byte per pixel
size_t y_size = pixelWidth * pixelHeight;
// UV plane byte size: interleaved Cb/Cr at quarter resolution, i.e. half the Y size
size_t uv_size = y_size / 2;
uint8_t *yuv_frame = malloc(y_size + uv_size);

// Copy the Y plane row by row: bytesPerRow may be larger than the
// width because of row padding, so one big memcpy is not always safe.
uint8_t *y_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t y_stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (size_t row = 0; row < pixelHeight; row++) {
    memcpy(yuv_frame + row * pixelWidth, y_frame + row * y_stride, pixelWidth);
}

// Copy the interleaved CbCr plane the same way: half the height, and
// pixelWidth bytes per row (pixelWidth / 2 Cb/Cr pairs of 2 bytes each).
uint8_t *uv_frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uv_stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (size_t row = 0; row < pixelHeight / 2; row++) {
    memcpy(yuv_frame + y_size + row * pixelWidth, uv_frame + row * uv_stride, pixelWidth);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
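
To end up with fully planar Y, U, and V (what the question actually asks for), the interleaved CbCr plane still has to be de-interleaved by hand. A minimal sketch continuing from the buffer filled above; the chromaWidth, chromaHeight, u_plane, and v_plane names are illustrative, not part of the original answer:

// The chroma plane is half the luma resolution in each dimension.
size_t chromaWidth  = pixelWidth / 2;
size_t chromaHeight = pixelHeight / 2;
uint8_t *u_plane = malloc(chromaWidth * chromaHeight);
uint8_t *v_plane = malloc(chromaWidth * chromaHeight);

// Bytes in the interleaved plane alternate Cb (U), Cr (V), Cb, Cr, ...
uint8_t *uv = yuv_frame + y_size;
for (size_t i = 0; i < chromaWidth * chromaHeight; i++) {
    u_plane[i] = uv[2 * i];     // Cb sample
    v_plane[i] = uv[2 * i + 1]; // Cr sample
}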