I’m trying to convert YUV_420_888 images coming from the camera2 preview into bitmaps, but the output image has incorrect colors.
Next is the test code I’m running.
There is no direct way to copy a YUV_420_888 camera frame into an RS Allocation. As of today, RenderScript does not support this format.
If you know that, under the hood, your frame is NV21 or YV12, you can copy the entire ByteBuffer to an array and pass it to an RS Allocation.
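A minimal sketch of that approach, assuming the device really does deliver an NV21-style layout under the hood (V/U interleaved in plane 2 with a pixel stride of 2 and no row padding); none of this is guaranteed by the YUV_420_888 contract and has to be verified on the target device:

// Sketch under the NV21 assumption described above; rs is an existing RenderScript instance.
private Allocation yuvImageToAllocation(final RenderScript rs, final Image image)
{
    final ByteBuffer yBuffer  = image.getPlanes()[0].getBuffer(); // Y
    final ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer(); // V, interleaved with U on NV21-like devices

    final int ySize  = yBuffer.remaining();
    final int vuSize = vuBuffer.remaining();

    final byte[] nv21 = new byte[ySize + vuSize];
    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);

    final Allocation input = Allocation.createSized(rs, Element.U8(rs), nv21.length);
    input.copyFrom(nv21);
    return input;
}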
RenderScript supports YUV_420_888 as ScriptIntrinsicYuvToRGB's source.
Create the Allocation and the ScriptIntrinsicYuvToRGB:
RenderScript renderScript = RenderScript.create(this);
ScriptIntrinsicYuvToRGB mScriptIntrinsicYuvToRGB =
        ScriptIntrinsicYuvToRGB.create(renderScript, Element.YUV(renderScript));

Allocation mAllocationInYUV = Allocation.createTyped(renderScript,
        new Type.Builder(renderScript, Element.YUV(renderScript))
                .setYuvFormat(ImageFormat.YUV_420_888)
                .setX(480)
                .setY(640)
                .create(),
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

Allocation mAllocationOutRGB = Allocation.createTyped(renderScript,
        Type.createXY(renderScript, Element.RGBA_8888(renderScript), 480, 640),
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
Set the Allocation's Surface (Allocation.getSurface()) as a capture target to receive image data from the camera:
final CaptureRequest.Builder captureRequest = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureRequest.addTarget(mAllocationInYUV.getSurface());
Output to a TextureView, ImageReader, or SurfaceView:
mAllocationOutRGB.setSurface(new Surface(mTextureView.getSurfaceTexture()));
mAllocationInYUV.setOnBufferAvailableListener(new Allocation.OnBufferAvailableListener() {
    @Override
    public void onBufferAvailable(Allocation a) {
        a.ioReceive();
        mScriptIntrinsicYuvToRGB.setInput(a);
        mScriptIntrinsicYuvToRGB.forEach(mAllocationOutRGB);
        mAllocationOutRGB.ioSend();
    }
});
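Note that the snippet above only builds the preview request; to actually start streaming frames into the input Allocation, the request still has to be submitted to the capture session. A minimal sketch, assuming session is the already-configured CameraCaptureSession and mBackgroundHandler is a background Handler of your own:

try {
    // Start the repeating preview so frames arrive on mAllocationInYUV's Surface.
    session.setRepeatingRequest(captureRequest.build(), null, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}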
Answering my own question: the actual problem was, as I suspected, in how I was transforming the Image planes into the ByteBuffer. Below is the solution, which should work for both NV21 and YV12. Since the YUV data already comes in separate planes, it is just a matter of reading them the correct way based on their row and pixel strides. I also needed to make some minor modifications to how the data is passed to the RenderScript intrinsic.
NOTE: For a production-optimized, uninterrupted onImageAvailable() flow, the Image byte data should instead be copied into a separate buffer first and the conversion executed on a separate thread (depending on your requirements). But since that isn't part of the question, in the code below the conversion is placed directly inside onImageAvailable() to keep the answer simple. If anyone needs to know how to copy the Image data, please create a new question and let me know, and I will share my code.
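Purely as an illustration of that note (this is a generic sketch, not the code referred to above), the copy-then-offload pattern could look roughly like this, where mConversionExecutor and the convertToBitmap() helper are hypothetical names:

// Illustrative sketch only: mConversionExecutor and convertToBitmap() are hypothetical.
private final ExecutorService mConversionExecutor = Executors.newSingleThreadExecutor();

@Override
public void onImageAvailable(ImageReader reader)
{
    final Image image = reader.acquireLatestImage();
    if (image == null)
    {
        return;
    }

    // Copy the plane data out so the Image can be closed immediately and the
    // ImageReader queue is never starved.
    final ByteBuffer yuvBytes = imageToByteBuffer(image);
    final int width  = image.getWidth();
    final int height = image.getHeight();
    image.close();

    // Run the comparatively slow YUV -> RGB conversion off the listener thread.
    mConversionExecutor.execute(() -> convertToBitmap(yuvBytes, width, height));
}

With that caveat out of the way, here is the simplified inline version: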
@Override
public void onImageAvailable(ImageReader reader)
{
    // Get the YUV data
    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB
    final RenderScript rs = RenderScript.create(this.mContext);

    final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);
    allocationRgb.copyTo(bitmap);

    // ... use the bitmap here (display it, save it, etc.) before releasing it ...

    // Release
    bitmap.recycle();

    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();

    image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
    final Rect crop   = image.getCropRect();
    final int  width  = crop.width();
    final int  height = crop.height();

    final Image.Plane[] planes = image.getPlanes();
    final byte[]     rowData    = new byte[planes[0].getRowStride()];
    final int        bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer output     = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride  = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++)
    {
        if (planeIndex == 0)
        {
            // Y plane: copied first, one byte per pixel
            channelOffset = 0;
            outputStride  = 1;
        }
        else if (planeIndex == 1)
        {
            // U plane: interleaved one byte after V, so the output ends up as NV21 (VUVU...)
            channelOffset = width * height + 1;
            outputStride  = 2;
        }
        else if (planeIndex == 2)
        {
            // V plane: starts right after the Y plane
            channelOffset = width * height;
            outputStride  = 2;
        }

        final ByteBuffer buffer      = planes[planeIndex].getBuffer();
        final int        rowStride   = planes[planeIndex].getRowStride();
        final int        pixelStride = planes[planeIndex].getPixelStride();

        // Chroma planes are subsampled by 2 in both dimensions
        final int shift         = (planeIndex == 0) ? 0 : 1;
        final int widthShifted  = width  >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++)
        {
            final int length;

            if (pixelStride == 1 && outputStride == 1)
            {
                // Fast path: the row is tightly packed, copy it in one go
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            }
            else
            {
                // Slow path: honour the pixel stride, copying one sample at a time
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++)
                {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            // Skip the row padding, except after the last row
            if (row < heightShifted - 1)
            {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
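Since the buffer assembled above ends up in NV21 order (the full Y plane followed by interleaved V/U samples), one way to sanity-check it independently of RenderScript is to push it through YuvImage and write a JPEG. A minimal sketch; the helper name and output file are just illustrations:

private void dumpNv21ToJpeg(final ByteBuffer nv21, final int width, final int height) throws IOException
{
    // If the colors look right here, the buffer layout is correct and any remaining
    // issue is in the RenderScript stage.
    final YuvImage yuvImage = new YuvImage(nv21.array(), ImageFormat.NV21, width, height, null);

    try (FileOutputStream out = new FileOutputStream(new File(mContext.getCacheDir(), "debug_frame.jpg")))
    {
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, out);
    }
}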