glTexImage2D

Pass data between shader programs

[亡魂溺海] Submitted on 2020-08-10 19:21:12
Question: OK, I'm going to keep this as simple as possible. I want to pass data between shader programs. I'm currently using readPixels to do that, but I feel it may be slowing operations down and I'm exploring faster options. What my program does: program1 does my rendering to the canvas; program2 does some wonderful operations in its shaders whose results I want to pass to program1. My questions: is it possible to take the VBO from program2 and pass it to program1 for rendering? From what it sounds like in
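A common alternative to readPixels here is render-to-texture: program2 draws into a texture attached to a framebuffer object, and program1 then samples that texture directly, so the data never leaves the GPU. The sketch below uses desktop OpenGL C calls rather than the asker's WebGL (the WebGL equivalents are analogous); the names fbo, fboTex and the texture size are illustrative assumptions.

C++ code (sketch):
// Minimal render-to-texture setup.
GLuint fbo, fboTex;
const int texW = 512, texH = 512;                // illustrative size

glGenTextures(1, &fboTex);
glBindTexture(GL_TEXTURE_2D, fboTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // allocate storage, no data yet
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fboTex, 0);

// Pass 1: program2 renders its result into the texture.
glUseProgram(program2);
glViewport(0, 0, texW, texH);
// ... draw ...

// Pass 2: program1 renders to the canvas/default framebuffer, sampling fboTex.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(program1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fboTex);
// ... draw, with a sampler2D uniform in program1 set to texture unit 0 ...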

glTexImage2D vs. gluBuild2DMipmaps

浪子不回头ぞ Submitted on 2020-01-02 08:13:28
Question: Very basic OpenGL texture creation code:

int width, height;
BYTE * data;
FILE * file;

// open texture data
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;

// allocate buffer
width = 256;
height = 256;
data = (BYTE*) malloc( width * height * 3 );

// read texture data
fread( data, width * height * 3, 1, file );
fclose( file );

glGenTextures( 1, &texture );
glBindTexture( GL_TEXTURE_2D, texture );
//gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, data )
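The usual pitfall behind this comparison is that glTexImage2D alone uploads only the base level, while the default GL_TEXTURE_MIN_FILTER (GL_NEAREST_MIPMAP_LINEAR) expects a complete mipmap chain, so the texture samples as black. A minimal sketch of the two common fixes, assuming the same texture and data variables as the excerpt above:

C++ code (sketch):
// Option A: keep a single level and use a non-mipmap minification filter.
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
              GL_RGB, GL_UNSIGNED_BYTE, data );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

// Option B (GL 3.0+): upload the base level, then build mipmaps on the GPU
// instead of calling gluBuild2DMipmaps.
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
              GL_RGB, GL_UNSIGNED_BYTE, data );
glGenerateMipmap( GL_TEXTURE_2D );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );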

What happens to pixels after passing them into glTexImage2D()?

半腔热情 Submitted on 2020-01-02 01:13:11
Question: If, for example, I create an array of pixels like so:

int *getPixels() {
    int *pixels = new int[10];
    pixels[0] = 1;
    pixels[1] = 0;
    pixels[2] = 1;
    // etc...
    return pixels;
}

glTexImage2D(..., getPixels());

Does glTexImage2D use that reference or copy the pixels into its own memory? If the answer is the former, then should I do the following?

int *p = getPixels();
glTexImage2D(..., p);
/* Just changed to delete[], because delete
   would only delete the first element! */
delete[] p;

Answer 1: From this quote in the
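The answer per the specification is that glTexImage2D copies the client data during the call into driver-owned storage, so the pointer can be freed immediately afterwards. A minimal sketch, assuming an illustrative 2×2 RGBA texture:

C++ code (sketch):
// glTexImage2D reads the client array during the call and keeps its own copy,
// so freeing the array right after the call is safe.
const int w = 2, h = 2;
unsigned char *pixels = new unsigned char[w * h * 4];   // RGBA, illustrative data
for (int i = 0; i < w * h * 4; ++i) pixels[i] = 255;

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

delete[] pixels;   // safe: the GL has already copied the data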

OpenGL glGetTexImage2d type parameter?

て烟熏妆下的殇ゞ Submitted on 2019-12-12 17:02:12
Question: Reading the docs, I see that the glGetTexImage2d() function has a 'type' parameter. The docs say that the type parameter "specifies the data type of the pixel data" and give some examples of types such as GL_INT, GL_BYTE, etc. But what does it mean precisely when the image format is GL_RGBA and the type is GL_INT? Is it an int for each channel, or an int for the whole color? And if it's an int for the whole color, then isn't that really the same as GL_BYTE? Since there are 4 bytes in an int, which makes
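The type applies per channel unless a packed type such as GL_UNSIGNED_INT_8_8_8_8 is used, so GL_RGBA with GL_INT means four ints per pixel (16 bytes), while GL_RGBA with GL_UNSIGNED_BYTE means four bytes per pixel. (The query itself is glGetTexImage; there is no glGetTexImage2d.) A sketch of reading back the same bound texture both ways; the buffer names and size are illustrative:

C++ code (sketch):
// Reading back a w x h GL_RGBA texture with two different 'type' values.
// GL_UNSIGNED_BYTE: 1 byte per channel  -> 4 bytes per pixel.
// GL_INT:           4 bytes per channel -> 16 bytes per pixel.
int w = 256, h = 256;                                    // illustrative size

unsigned char *bytePixels = new unsigned char[w * h * 4];
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, bytePixels);

GLint *intPixels = new GLint[w * h * 4];                 // still 4 values per pixel
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_INT, intPixels);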

Fastest 2D frame rate possible with android NDK, my try included, better options available?

邮差的信 Submitted on 2019-12-12 09:05:15
Question: Fastest 2D frame rate possible with the Android NDK, my try included; are better options available? I used the NDK and OpenGL ES 2.0 to display a frame as a texture on a GL_TRIANGLE_STRIP. This was done on an HTC Desire, the same hardware as the Nexus One. I tried to load multiple GL_RGBA textures and switch between them, because the normal fill rate with a single texture was disappointingly low: 1 texture: 4.78 fps; 2 textures: 19.68 fps; 3 textures: 20.18 fps; 4 textures: 28.52 fps; 5 textures: 29.01
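The numbers suggest a stall while re-uploading into a texture the GPU is still reading from; cycling through several textures is effectively manual double or triple buffering. A common alternative (an illustrative sketch, not the poster's code; streamTex, frameW, frameH, framePixels are assumed names) is to allocate the texture storage once and update it each frame with glTexSubImage2D instead of recreating it with glTexImage2D:

C++ code (sketch):
// One-time setup: allocate storage for the streaming texture.
glBindTexture(GL_TEXTURE_2D, streamTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameW, frameH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);          // no data yet
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Per frame: overwrite the existing storage instead of reallocating it.
glBindTexture(GL_TEXTURE_2D, streamTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameW, frameH,
                GL_RGBA, GL_UNSIGNED_BYTE, framePixels);
// ... draw the GL_TRIANGLE_STRIP ...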

How to properly use GL_HALF_FLOAT_OES for textures?

心不动则不痛 Submitted on 2019-12-11 09:31:28
Question: I'm using OpenGL ES 2.0 on an iPad 2 / 3. I'd like to use GL_FLOAT when creating textures: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_FLOAT, rawData); but the problem is that GL_LINEAR is not supported as the GL_TEXTURE_MAG_FILTER when GL_FLOAT is used unless GL_OES_texture_float_linear shows up in your list of supported extensions (none of the iPads have it). But I do have GL_OES_texture_half_float_linear in my list of extensions. So using a half-float
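With GL_OES_texture_half_float, the upload looks the same except that the type is GL_HALF_FLOAT_OES and the client data must already be 16-bit half floats; ES 2.0 offers no conversion path from 32-bit floats. A minimal sketch, assuming the extension is present and reusing the excerpt's texWidth, texHeight and rawData; the floatToHalf helper is a simplified conversion that ignores NaN and denormal edge cases:

C++ code (sketch):
#include <string.h>
#include <stdint.h>

// Simplified float -> half-float conversion (ignores NaN/denormals).
static uint16_t floatToHalf(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15;
    uint16_t mant = (uint16_t)((bits >> 13) & 0x3FFu);
    if (exp <= 0)  return sign;                          // underflow -> signed zero
    if (exp >= 31) return (uint16_t)(sign | 0x7C00u);    // overflow  -> infinity
    return (uint16_t)(sign | (uint16_t)(exp << 10) | mant);
}

// Convert the float data, then upload with GL_HALF_FLOAT_OES.
uint16_t *halfData = new uint16_t[texWidth * texHeight * 4];
for (int i = 0; i < texWidth * texHeight * 4; ++i)
    halfData[i] = floatToHalf(rawData[i]);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_HALF_FLOAT_OES, halfData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // now legal
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
delete[] halfData;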

glTexImage2D failing in GLUT/FreeType example with OpenGL 3 and above

廉价感情. Submitted on 2019-12-08 01:32:43
Question: I'm using the example found here: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_01 The issue is that when I specify an OpenGL context with a version of 3 or above, i.e. glutInitContextVersion(3, 2);, calling glTexImage2D results in a GL_INVALID_ENUM error (as reported by GLIntercept) and the output of the example is blank. As far as I can tell from the OpenGL 3.3 reference, this should only happen if type is not a valid type constant or target is an

glTexImage2D failing in GLUT/FreeType example with OpenGL 3 and above

我怕爱的太早我们不能终老 Submitted on 2019-12-06 14:02:12
I'm using the example found here: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_01 The issue is that when I specify an OpenGL context with a version of 3 or above, i.e. glutInitContextVersion(3, 2);, calling glTexImage2D results in a GL_INVALID_ENUM error (as reported by GLIntercept) and the output of the example is blank. As far as I can tell from the OpenGL 3.3 reference, this should only happen if type is not a valid type constant or target is an invalid target. Here is the offending line: glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, g->bitmap.width, g-
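The usual cause is GL_ALPHA itself: it was removed from core profiles, so a 3.2 core context rejects the call. A common replacement (a sketch of the standard workaround, not the tutorial's exact fix) is a single-channel GL_RED/GL_R8 texture, with either a texture swizzle or the shader reading the .r component; the FreeType glyph fields follow the tutorial's naming:

C++ code (sketch):
// Core-profile replacement for a GL_ALPHA glyph texture: one red channel.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // glyph rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,
             g->bitmap.width, g->bitmap.rows, 0,
             GL_RED, GL_UNSIGNED_BYTE, g->bitmap.buffer);

// Optional: make texture(tex, uv).a return the glyph coverage, so the fragment
// shader can keep reading the alpha channel.
// (Texture swizzling needs GL 3.3 or ARB_texture_swizzle; otherwise read .r.)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);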

glPixelStorei explained (including pixel transfer)

爱⌒轻易说出口 Submitted on 2019-12-06 12:04:23
3. glPixelStore. A call like glPixelStorei(GL_PACK_ALIGNMENT, 1) usually shows up around pixel transfer (PACK/UNPACK), and especially around texture uploads with glTexImage2D:

C++ code:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(,,,, &pixelData);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);

Clearly the pattern is: change some piece of state, then restore it afterwards. Why state? Because OpenGL is a state machine. Which state? The name says it plainly: the glPixelStore family of functions changes the pixel storage format. Whenever pixels are transferred between the CPU and the GPU, there is a notion of storage format. What format is the pixel data in client memory? What format does it have once it reaches the GPU? Are the two the same? glTexImage2D takes two color-format parameters: one for the texture (the GPU, i.e. server side) and one for the pixel data (application memory, i.e. client side). The two are not necessarily the same, and even when they are, that does not mean the GPU stores the data the way main memory does. Put another way: if you read the pixels of an image on disk into memory and upload them to the GPU as a texture, will that "texture" still be the same flat RGBARGBA sequence? Obviously not
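A concrete case where the unpack alignment matters is an RGB image whose row size is not a multiple of 4: with the default GL_UNPACK_ALIGNMENT of 4, the GL would expect padding at the end of every row and read the data skewed. A minimal sketch with an illustrative 3×2 tightly packed RGB image:

C++ code (sketch):
// A 3-pixel-wide RGB image: each row is 9 bytes, not a multiple of 4.
// With the default GL_UNPACK_ALIGNMENT of 4 the GL would expect 12-byte rows.
const int w = 3, h = 2;
unsigned char pixels[w * h * 3] = {};   // fill with 18 tightly packed RGB bytes

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);               // rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);               // restore the default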

glTexImage2D vs. gluBuild2DMipmaps

橙三吉。 Submitted on 2019-12-06 00:29:17
Very basic OpenGL texture creation code:

int width, height;
BYTE * data;
FILE * file;

// open texture data
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;

// allocate buffer
width = 256;
height = 256;
data = (BYTE*) malloc( width * height * 3 );

// read texture data
fread( data, width * height * 3, 1, file );
fclose( file );

glGenTextures( 1, &texture );
glBindTexture( GL_TEXTURE_2D, texture );
//gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, data );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data );
free( data
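Note also that the commented-out gluBuild2DMipmaps call passes 3 as its internal format, the legacy "number of components" form; a sized internal format plus glGenerateMipmap is the modern equivalent. A hedged sketch of the same upload, reusing the excerpt's variables:

C++ code (sketch):
// Same upload with an explicit sized internal format instead of the legacy "3".
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
              GL_RGB, GL_UNSIGNED_BYTE, data );
glGenerateMipmap( GL_TEXTURE_2D );   // GL 3.0+ replacement for gluBuild2DMipmaps
free( data );                        // glTexImage2D has already copied the pixels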