Options to efficiently draw a stream of byte arrays to display in Android

Submitted by 天大地大妈咪最大 on 2019-12-04 10:51:29
WLGfx

I'm using ffmpeg for my project, but the principle of rendering the YUV frame should be the same for you.

If a frame, for example, is 756 x 576, then the Y plane will be that size. The U and V planes are each half the width and half the height of the Y plane, so make sure you account for the size differences.
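The 4:2:0 plane sizes work out as follows (a minimal sketch; class and method names are mine, the 756 x 576 figure is the example above):

```java
public class YuvPlaneSizes {
    // For YUV 4:2:0, the U and V planes are half the width and half the
    // height of the Y plane, so each chroma plane holds a quarter of the pixels.
    static int ySize(int width, int height) { return width * height; }
    static int chromaSize(int width, int height) { return (width / 2) * (height / 2); }

    public static void main(String[] args) {
        int w = 756, h = 576;
        System.out.println("Y plane bytes:   " + ySize(w, h));      // 435456
        System.out.println("U/V plane bytes: " + chromaSize(w, h)); // 108864
    }
}
```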

I don't know about the camera API, but the frames I get from a DVB source have a width and also a stride per line: extra pixels at the end of each line of the frame. Just in case yours are the same, account for this when calculating your texture coordinates.

Adjusting the texture coordinates to account for the width and stride (linesize):

float u = 1.0f / buffer->y_linesize * buffer->wid; // adjust texture coord for edge
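In plain Java the same adjustment might look like this sketch (the 768-pixel linesize is a hypothetical value for illustration):

```java
public class TexCoordStride {
    // When each row of the frame is padded to a stride (linesize) wider than
    // the visible width, the right-hand texture coordinate must stop short of
    // 1.0, otherwise the padding pixels bleed into the picture.
    static float rightTexCoord(int linesize, int width) {
        return 1.0f / linesize * width;
    }

    public static void main(String[] args) {
        // e.g. a 756-pixel-wide frame stored with a 768-pixel stride
        System.out.println(rightTexCoord(768, 756)); // 0.984375
    }
}
```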

The vertex shader I've used takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in the texture coordinates and a colour input, which I've used so that I can add fading, etc.

Vertex shader:

#ifdef GL_ES
precision mediump float;
const float c1 = 1.0;
const float c2 = 2.0;
#else
const float c1 = 1.0f;
const float c2 = 2.0f;
#endif

attribute vec4 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_colorin;
varying vec2 v_texcoord;
varying vec4 v_colorout;



void main(void)
{
    v_texcoord = a_texcoord;
    v_colorout = a_colorin;

    float x = a_vertex.x * c2 - c1;
    float y = -(a_vertex.y * c2 - c1);

    gl_Position = vec4(x, y, a_vertex.z, c1);
}

The fragment shader takes three uniform textures, one each for the Y, U and V frames, and converts them to RGB. It also multiplies by the colour passed in from the vertex shader:

#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D u_texturey;
uniform sampler2D u_textureu;
uniform sampler2D u_texturev;
varying vec2 v_texcoord;
varying vec4 v_colorout;

void main(void)
{
    float y = texture2D(u_texturey, v_texcoord).r;
    float u = texture2D(u_textureu, v_texcoord).r - 0.5;
    float v = texture2D(u_texturev, v_texcoord).r - 0.5;
    vec4 rgb = vec4(y + 1.403 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.770 * u,
                    1.0);
    gl_FragColor = rgb * v_colorout;
}
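The conversion the shader performs can be sanity-checked on the CPU (a sketch using the same constants as above, with the result clamped to [0, 1]; class and method names are mine):

```java
public class YuvToRgb {
    // Mirrors the fragment shader: y is in [0,1], u and v are already
    // centred on zero (i.e. the 0.5 has been subtracted).
    static float[] toRgb(float y, float u, float v) {
        float r = y + 1.403f * v;
        float g = y - 0.344f * u - 0.714f * v;
        float b = y + 1.770f * u;
        return new float[] {clamp(r), clamp(g), clamp(b)};
    }

    static float clamp(float x) { return Math.max(0f, Math.min(1f, x)); }

    public static void main(String[] args) {
        // A mid-grey pixel with no chroma stays grey.
        float[] grey = toRgb(0.5f, 0f, 0f);
        System.out.println(grey[0] + " " + grey[1] + " " + grey[2]);
    }
}
```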

The vertex structure used is:

float   x, y, z;    // coords
float   s, t;       // texture coords
uint8_t r, g, b, a; // colour and alpha
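Packing that layout into a direct ByteBuffer (as you would before a glVertexAttribPointer call) could be sketched like this; the stride is 24 bytes, i.e. three position floats, two texture-coordinate floats and four colour bytes. The class and helper names are mine:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class VertexPacker {
    static final int STRIDE = 3 * 4 + 2 * 4 + 4; // xyz + st + rgba = 24 bytes

    // Appends one vertex in the x,y,z / s,t / r,g,b,a layout described above.
    static void putVertex(ByteBuffer buf, float x, float y, float z,
                          float s, float t, int r, int g, int b, int a) {
        buf.putFloat(x).putFloat(y).putFloat(z);
        buf.putFloat(s).putFloat(t);
        buf.put((byte) r).put((byte) g).put((byte) b).put((byte) a);
    }

    public static void main(String[] args) {
        // Native byte order is what the GL driver expects for client buffers.
        ByteBuffer quad = ByteBuffer.allocateDirect(4 * STRIDE)
                                    .order(ByteOrder.nativeOrder());
        putVertex(quad, 0f, 0f, 0f, 0f, 0f, 255, 255, 255, 255);
        System.out.println("bytes per vertex: " + STRIDE); // 24
    }
}
```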

Hope this helps!

EDIT:

For the NV12 format you can still use a fragment shader, although I've not tried it myself. It takes in the interleaved UV as a luminance-alpha texture or similar.

See here for how one person has answered this: https://stackoverflow.com/a/22456885/2979092

I took several answers from SO and various articles, plus @WLGfx's answer above, to come up with this:

I created two byte buffers, one for the Y part and one for the UV part of the texture, then converted the byte buffers to textures using:

public static int createImageTexture(ByteBuffer data, int width, int height, int format, int textureHandle) {
    if (GLES20.glIsTexture(textureHandle)) {
        return updateImageTexture(data, width, height, format, textureHandle);
    }
    int[] textureHandles = new int[1];

    GLES20.glGenTextures(1, textureHandles, 0);
    textureHandle = textureHandles[0];
    GlUtil.checkGlError("glGenTextures");

    // Bind the texture handle to the 2D texture target.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);

    // Configure min/mag filtering, i.e. what scaling method do we use if what we're rendering
    // is smaller or larger than the source image.
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GlUtil.checkGlError("glTexParameter");

    // Load the data from the buffer into the texture handle.
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height,
            0, format, GLES20.GL_UNSIGNED_BYTE, data);
    GlUtil.checkGlError("glTexImage2D");

    return textureHandle;
}
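Filling those two byte buffers from a single NV12 frame (the Y plane followed by interleaved UV) might look like the sketch below. The plane layout is an assumption, and the class name is mine; note that Android's camera NV21 variant has V and U swapped within each pair:

```java
import java.nio.ByteBuffer;

public class Nv12Splitter {
    // Splits an NV12 frame into the Y buffer and the interleaved UV buffer
    // that would be uploaded as GL_LUMINANCE and GL_LUMINANCE_ALPHA textures.
    static ByteBuffer[] split(byte[] frame, int width, int height) {
        int ySize = width * height; // full-resolution luma plane
        int uvSize = ySize / 2;     // interleaved UV pairs at half resolution

        ByteBuffer yBuf = ByteBuffer.allocateDirect(ySize);
        yBuf.put(frame, 0, ySize);
        yBuf.position(0);

        ByteBuffer uvBuf = ByteBuffer.allocateDirect(uvSize);
        uvBuf.put(frame, ySize, uvSize);
        uvBuf.position(0);

        return new ByteBuffer[] {yBuf, uvBuf};
    }
}
```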

Both these textures are then passed as normal 2D textures to the GLSL shader:

precision highp float;
varying vec2 vTextureCoord;
uniform sampler2D sTextureY;
uniform sampler2D sTextureUV;
uniform float sBrightnessValue;
uniform float sContrastValue;
void main (void) {
    float r, g, b, y, u, v;
    // We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
    // that's why we're pulling it from the R component, we could also use G or B
    y = texture2D(sTextureY, vTextureCoord).r;
    // We had put the U and V values of each pixel to the A and R,G,B components of the
    // texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspread
    // in the texture, this is probably the fastest way to use them in the shader
    u = texture2D(sTextureUV, vTextureCoord).r - 0.5;
    v = texture2D(sTextureUV, vTextureCoord).a - 0.5;
    // The numbers are just YUV to RGB conversion constants
    r = y + 1.13983*v;
    g = y - 0.39465*u - 0.58060*v;
    b = y + 2.03211*u;
    // setting brightness/contrast
    r = r * sContrastValue + sBrightnessValue;
    g = g * sContrastValue + sBrightnessValue;
    b = b * sContrastValue + sBrightnessValue;
    // We finally set the RGB color of our pixel
    gl_FragColor = vec4(r, g, b, 1.0);
}