Question
I am trying to overlay stickers on faces using OpenCV and OpenGL.
I am getting the ByteBuffer inside onDrawFrame:
@Override
public void onDrawFrame(GL10 unused) {
    if (VERBOSE) {
        Log.d(TAG, "onDrawFrame tex=" + mTextureId);
    }
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);
    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);
    mat.put(0, 0, byteBuffer.array());
    if (mCascadeClassifier != null) {
        mFaces.empty();
        mCascadeClassifier.detectMultiScale(mat, mFaces);
        Log.d(TAG, "No. of faces detected : " + mFaces.toArray().length);
    }
    drawFrame(mTextureId, mSTMatrix);
}
My mat object is initialized with the camera preview width and height:
mat = new Mat(height, width, CvType.CV_8UC3);
The log reports 0 face detections. I have two questions:
- What am I missing here for face detection using OpenCV?
- Also, how can I improve the performance/efficiency of video frame rendering while doing real-time face detection? glReadPixels takes time to execute and slows down the rendering.
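As a side note on the buffer layout in the code above: glReadPixels with GL_RGBA writes 4 bytes per pixel, while a CV_8UC3 Mat expects 3 bytes per pixel, so the sizes only line up if the Mat is created as CV_8UC4. A quick pure-Java byte-count check (class and method names here are mine, for illustration only):

```java
public class BufferSizeCheck {
    // Bytes produced by glReadPixels(..., GL_RGBA, GL_UNSIGNED_BYTE, ...)
    static long rgbaBytes(int width, int height) {
        return (long) width * height * 4;
    }

    // Bytes expected by an 8-bit Mat with the given channel count
    static long matBytes(int width, int height, int channels) {
        return (long) width * height * channels;
    }

    public static void main(String[] args) {
        int w = 640, h = 480;
        System.out.println(rgbaBytes(w, h));   // 1228800
        System.out.println(matBytes(w, h, 3)); // 921600  -- CV_8UC3 is too small for RGBA data
        System.out.println(matBytes(w, h, 4)); // 1228800 -- CV_8UC4 matches
    }
}
```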
Answer 1:
You are calling glReadPixels() on the GLES frame buffer before you've rendered anything. You'd need to do it after drawFrame() if you were hoping to read back the SurfaceTexture rendering. You may want to consider rendering the texture offscreen to a pbuffer EGLSurface instead, and reading back from that.
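The reordering described above could be sketched roughly as follows (field names mirror the question's code; the CV_8UC4 rgbaMat, the grayscale conversion, and the MatOfRect reset are my additions, since cascade classifiers generally want a single-channel image and Mat.empty() only tests for emptiness rather than clearing):

```java
@Override
public void onDrawFrame(GL10 unused) {
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mSTMatrix);

    // Render first, so the frame buffer actually contains the current frame...
    drawFrame(mTextureId, mSTMatrix);

    // ...then read it back. byteBuffer must be heap-allocated
    // (ByteBuffer.allocate) for array() to work below.
    byteBuffer.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, byteBuffer);

    // GL_RGBA is 4 bytes/pixel, so the Mat must be CV_8UC4, not CV_8UC3:
    // rgbaMat = new Mat(mHeight, mWidth, CvType.CV_8UC4)
    rgbaMat.put(0, 0, byteBuffer.array());
    Imgproc.cvtColor(rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);

    if (mCascadeClassifier != null) {
        mFaces = new MatOfRect(); // allocate a fresh result container each frame
        mCascadeClassifier.detectMultiScale(grayMat, mFaces);
        Log.d(TAG, "No. of faces detected : " + mFaces.toArray().length);
    }
}
```

This is only a sketch of the ordering fix; it still pays the glReadPixels stall every frame, which is why the offscreen pbuffer approach (or the options below) may be preferable.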
There are a few different ways to get the pixel data from the Camera:
- Use the Camera byte[] APIs. Generally involves a software copy, so it tends to be slow.
- Send the output to an ImageReader. This gives you immediate access to the raw YUV data.
- Send the output to a SurfaceTexture, render the texture, read RGB data out with glReadPixels() (which is what I believe you are trying to do). This is generally very fast, but on some devices and versions of Android it can be slow.
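The second option (ImageReader) might look like this sketch, assuming the camera output is wired to the reader's Surface elsewhere; backgroundHandler and the width/height variables are placeholders:

```java
ImageReader reader = ImageReader.newInstance(
        width, height, ImageFormat.YUV_420_888, /*maxImages=*/2);

reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;

    // The Y plane alone is enough for a grayscale cascade detector.
    // (On some devices the row stride exceeds the width, which would
    // need per-row copying; omitted here for brevity.)
    ByteBuffer yPlane = image.getPlanes()[0].getBuffer();
    byte[] data = new byte[yPlane.remaining()];
    yPlane.get(data);

    Mat gray = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC1);
    gray.put(0, 0, data);

    // ... run mCascadeClassifier.detectMultiScale(gray, mFaces) here,
    // ideally on a worker thread so rendering is never blocked ...

    image.close();
}, backgroundHandler);
```

Because detection runs off the render thread, this avoids stalling onDrawFrame on glReadPixels entirely.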
Source: https://stackoverflow.com/questions/33368655/opengl-byte-buffer-with-opencv-face-detection