I'm implementing a camera application on Android. Currently I use the Camera2 API and an ImageReader to get image data in YUV_420_888 format, but I don't know exactly how to write that data to a MediaCodec.
Here are my questions:
- What is YUV_420_888?

The format YUV_420_888 is ambiguous because it can be any format that belongs to the YUV420 family, such as YUV420P, YUV420PP, YUV420SP and YUV420PSP, right?
By accessing the image's three planes (#0, #1, #2), I can get the Y (#0), U (#1), and V (#2) values of this image. But the arrangement of these values may not be the same on different devices. For example, if YUV_420_888 truly means YUV420P, the size of both Plane #1 and Plane #2 is a quarter of the size of Plane #0. If YUV_420_888 truly means YUV420SP, the size of both Plane #1 and Plane #2 is half of the size of Plane #0 (each of Plane #1 and Plane #2 then contains both U and V values).
If I want to write the data from the image's three planes to a MediaCodec, what format do I need to convert it to? YUV420, NV21, NV12, ...?
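For what it's worth, the actual layout can at least be probed at runtime. A minimal sketch of what I mean (assuming a non-null Image from the ImageReader; TAG is a placeholder):

    Image.Plane[] planes = image.getPlanes();
    // For YUV_420_888 the chroma planes reveal the real layout:
    //   pixelStride == 1 -> U and V tightly packed in separate planes (planar, YUV420P-like)
    //   pixelStride == 2 -> U and V interleaved in overlapping buffers (semi-planar, NV12/NV21-like)
    int uvPixelStride = planes[1].getPixelStride();
    int uvRowStride = planes[1].getRowStride();
    Log.d(TAG, "chroma pixelStride=" + uvPixelStride + ", rowStride=" + uvRowStride);

But even after probing this, I don't know which layout MediaCodec expects.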
- What is COLOR_FormatYUV420Flexible?

The format COLOR_FormatYUV420Flexible is also ambiguous because it can be any format that belongs to the YUV420 family, right? If I set the KEY_COLOR_FORMAT option of a MediaCodec object to COLOR_FormatYUV420Flexible, what format of data (YUV420P, YUV420SP, ...?) should I feed to the MediaCodec object?
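My (untested) guess is that with the flexible format I shouldn't pick a fixed layout at all, but instead ask the codec for its own input Image via MediaCodec.getInputImage() and fill its planes. A sketch of what I imagine, where TIMEOUT_US, frameSize and the plane-copying step are placeholders:

    int inputIndex = mediaCodec.dequeueInputBuffer(TIMEOUT_US);
    if (inputIndex >= 0) {
        // The codec describes the layout it expects through this Image's
        // planes, each with its own rowStride and pixelStride.
        Image codecInput = mediaCodec.getInputImage(inputIndex);
        // ... copy the camera frame's Y/U/V into codecInput's planes,
        //     honoring each plane's strides ...
        // frameSize: total number of bytes written (e.g. width * height * 3 / 2)
        mediaCodec.queueInputBuffer(inputIndex, 0, frameSize, presentationTimeUs, 0);
    }

Is that the intended usage?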
- How about using COLOR_FormatSurface?

I know MediaCodec has its own surface, which can be used if I set the KEY_COLOR_FORMAT option of a MediaCodec object to COLOR_FormatSurface. With the Camera2 API, I then don't need to write any data to the MediaCodec object myself; I can just drain the output buffers.
However, I need to modify the image from the camera: for example, draw other pictures on it, write some text on it, or insert another video as PIP (picture in picture).
Can I use an ImageReader to read the image from the camera, redraw it, write the new data to MediaCodec's surface, and then drain it out? How can I do that?
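For the unmodified case, my understanding of the wiring is roughly this (a sketch only, assuming an already configured encoder; no per-frame copies involved):

    // The encoder's input surface becomes just another output target of the
    // camera capture session, so frames flow into the encoder directly.
    Surface encoderSurface = mediaCodec.createInputSurface(); // before mediaCodec.start()
    captureRequestBuilder.addTarget(encoderSurface);
    // ... include encoderSurface in the output list when creating the
    //     CameraCaptureSession, then just drain the encoder's output buffers ...

But this path gives me no chance to touch the pixels, which is why I tried the approach below.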
EDIT 1:

I implemented the function by using COLOR_FormatSurface and RenderScript. Here is my code:
The onImageAvailable method:

    public void onImageAvailable(ImageReader imageReader) {
        // try-with-resources so the Image is always closed and the
        // ImageReader does not run out of buffers.
        try (Image image = imageReader.acquireLatestImage()) {
            if (image == null) {
                return;
            }
            Image.Plane[] planes = image.getPlanes();
            if (planes.length >= 3) {
                ByteBuffer bufferY = planes[0].getBuffer();
                ByteBuffer bufferU = planes[1].getBuffer();
                ByteBuffer bufferV = planes[2].getBuffer();
                int lengthY = bufferY.remaining();
                int lengthU = bufferU.remaining();
                int lengthV = bufferV.remaining();
                // Concatenate the three planes into one byte array.
                byte[] dataYUV = new byte[lengthY + lengthU + lengthV];
                bufferY.get(dataYUV, 0, lengthY);
                bufferU.get(dataYUV, lengthY, lengthU);
                bufferV.get(dataYUV, lengthY + lengthU, lengthV);
                imageYUV = dataYUV;
            }
        } catch (final Exception ex) {
            // Swallow errors; dropping a single frame is acceptable here.
        }
    }
Convert YUV_420_888 to RGB:

    public static Bitmap YUV_420_888_toRGBIntrinsics(Context context, int width, int height, byte[] yuv) {
        RenderScript rs = RenderScript.create(context);
        ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

        // Input: the raw YUV bytes as a flat U8 allocation.
        Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuv.length);
        Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

        // Output: an RGBA_8888 allocation with the frame's dimensions.
        Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
        Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

        Bitmap bmpOut = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

        in.copyFromUnchecked(yuv);
        yuvToRgbIntrinsic.setInput(in);
        yuvToRgbIntrinsic.forEach(out);
        out.copyTo(bmpOut);
        return bmpOut;
    }
MediaCodec:

    mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    ...
    mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    ...
    // This surface is not used by the Camera2 API; the camera writes into the
    // ImageReader's surface. I draw onto this surface myself.
    surface = mediaCodec.createInputSurface();
And in another thread:

    while (!stop) {
        final byte[] image = imageYUV;
        // Do some YUV computation on the frame here.
        Bitmap bitmap = YUV_420_888_toRGBIntrinsics(getApplicationContext(), width, height, image);
        Canvas canvas = surface.lockHardwareCanvas();
        canvas.drawBitmap(bitmap, matrix, paint);
        surface.unlockCanvasAndPost(canvas);
    }
This works, but the performance is not good: it can't produce 30 fps video files (only ~12 fps). Perhaps I should not use COLOR_FormatSurface and the surface's canvas for encoding at all. The computed YUV data should be written to the MediaCodec directly, without any surface doing any conversion in between, but I still don't know how to do that.
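If the direct path really is getInputImage() as I guessed above, then I suppose each plane copy has to respect the strides on both sides. A hypothetical helper (untested; copyPlane is my own name, and for the chroma planes width and height would be halved):

    private static void copyPlane(Image.Plane src, Image.Plane dst, int width, int height) {
        ByteBuffer srcBuf = src.getBuffer();
        ByteBuffer dstBuf = dst.getBuffer();
        int srcPixelStride = src.getPixelStride();
        int dstPixelStride = dst.getPixelStride();
        for (int row = 0; row < height; row++) {
            int srcPos = row * src.getRowStride();
            int dstPos = row * dst.getRowStride();
            // Absolute-index get/put, so the buffers' positions stay untouched.
            for (int col = 0; col < width; col++) {
                dstBuf.put(dstPos + col * dstPixelStride,
                        srcBuf.get(srcPos + col * srcPixelStride));
            }
        }
    }

Is this the right direction, or is there a faster way to hand modified YUV frames to the encoder?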