Question
So, I am implementing a screen that applies effects (grain, negative, etc.) to an image the user has taken with their camera or picked from their gallery. I am able to take the selected image and display it through OpenGL at full resolution (or scaled down with the aspect ratio maintained, depending on the device's max texture size and the size of the image). Selecting an effect and applying it to the texture also works completely fine, as does generating a set of small 60x60 preview thumbnails with each effect applied. The way I do this is by rendering the image into a framebuffer and immediately saving it to a Bitmap using glReadPixels.
The problem I am having is this: I want to save the image with the selected filter applied at the same dimensions as what is being displayed. So I use the same algorithm I use for saving the thumbnails, but with the full image size instead of just 60x60. But when I do this, the saved bitmap contains only the lower-left corner of the image, and the rest is just black. However, when I change the dimensions from, say, 3072x4096 to 1080x1920, the bitmap is drawn and saved correctly. I believe the device's screen dimensions are preventing glReadPixels from reading the full size, since the image is bigger than the screen.
Does anyone have any insight into how to resolve this, or can anyone explain why the program behaves this way?
Thank you for your help.
int[] mTextures = new int[2];
// ... I set the Effect I want; this all works fine so I've omitted it for clarity ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // bind default framebuffer
// ... set up the program; everything here works ...
GLES20.glViewport(0, 0, mTexWidth, mTexHeight); // should be around 3072x4096, i.e. mImageWidth and mImageHeight
// ... continue with normal rendering ...
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextures[1]);
After this step is where I call glReadPixels. That method looks like this:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // is this the correct binding?
// allocate the byte buffer here; this works fine
ByteBuffer pixelBuffer = ByteBuffer.allocateDirect(mImageWidth * mImageHeight * 4);
GLES20.glReadPixels(0, 0, mImageWidth, mImageHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLToolBox.checkGlError("store Pixels");
pixelBuffer.rewind();
// create the bitmap and use a Matrix to account for the image being vertically flipped
Bitmap bm = Bitmap.createBitmap(mImageWidth, mImageHeight, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(pixelBuffer);
Matrix m = new Matrix();
m.setScale(-1, 1);
m.preRotate(180);
bm = Bitmap.createBitmap(bm, 0, 0, mImageWidth, mImageHeight, m, false);
Then I continue on to save the image to a file.
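For reference, saving the resulting Bitmap can be done with Bitmap.compress; a minimal sketch, assuming a Context is available (the file name and JPEG quality are illustrative, not from the original post):

File outFile = new File(context.getExternalFilesDir(Environment.DIRECTORY_PICTURES),
        "filtered_image.jpg"); // hypothetical output location
try (FileOutputStream fos = new FileOutputStream(outFile)) {
    // write the flipped bitmap out as a JPEG
    bm.compress(Bitmap.CompressFormat.JPEG, 100, fos);
} catch (IOException e) {
    Log.e("SaveImage", "Failed to save filtered image", e);
}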
Answer 1:
Derhass was correct: I was using the default FBO to render to a texture instead of creating my own Framebuffer Object. When using the default FBO, I could never read back more pixels than the device screen could display (which resulted in only the pixels from (0,0) to (mScreenWidth, mScreenHeight) being read, while the rest of the image was just black).
By generating your own FBO and setting its viewport to the desired width and height, you can read pixels from the bound framebuffer with glReadPixels at sizes up to the device's maximum texture size.
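A minimal sketch of that setup, assuming GLES20 and a current GL context; the names fbo, tex, and the image dimensions are illustrative, not taken from the original code:

// Create a texture at full image size to serve as the FBO's color attachment.
int[] fbo = new int[1];
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        mImageWidth, mImageHeight, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Create the FBO and attach the texture as its color buffer.
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
        != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("Framebuffer is not complete");
}

// The viewport now matches the image, not the screen.
GLES20.glViewport(0, 0, mImageWidth, mImageHeight);

// ... draw the textured quad with the effect shader here ...

// Read back the full image from the offscreen FBO.
ByteBuffer pixelBuffer = ByteBuffer.allocateDirect(mImageWidth * mImageHeight * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, mImageWidth, mImageHeight,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);

// Restore the default framebuffer for normal on-screen rendering.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

The key difference from the original code is that glReadPixels is called while the user-created FBO (not the default framebuffer) is bound, so the readable area is limited by the texture attachment's size rather than by the screen.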
Source: https://stackoverflow.com/questions/32312028/opengl-es-readpixels-to-bitmap-from-texture-larger-than-screen