textures

Using QImage with OpenGL

I've very recently picked up Qt and am using it with OpenGL. The problem is that when I move my SDL code to Qt and change the texture code to use QImage, it stops working. The image does load correctly, as shown by the error-checking code. Thanks!

P.S.: Please don't suggest I use glDrawPixels; I need to fix the problem at hand. Among the reasons: 1. it is slow, and 2. Android (which this code may eventually run on) uses OpenGL ES, which does not support glDrawPixels. Here's the code:

//original image
QImage img;
if(!img.load(":/star.png")) { //loads correctly
    qWarning("ERROR
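One common culprit when moving texture code to QImage is that a QImage's default pixel layout (ARGB, rows stored top to bottom) does not match what glTexImage2D is told to expect. Below is a minimal sketch of the usual conversion step, assuming Qt 5 (QImage::Format_RGBA8888) and a current GL context; the uploadQImage helper name is illustrative, not from the question's code.

#include <QImage>
#include <QOpenGLFunctions>

// Loads an image and uploads it as an RGBA texture. convertToFormat() matches
// GL_RGBA/GL_UNSIGNED_BYTE, and mirrored() flips the row order so the image
// is not uploaded upside down.
GLuint uploadQImage(QOpenGLFunctions *gl, const QString &path)
{
    QImage img;
    if (!img.load(path))
        return 0;

    QImage glImg = img.convertToFormat(QImage::Format_RGBA8888).mirrored();

    GLuint tex = 0;
    gl->glGenTextures(1, &tex);
    gl->glBindTexture(GL_TEXTURE_2D, tex);
    gl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    gl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    gl->glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glImg.width(), glImg.height(),
                     0, GL_RGBA, GL_UNSIGNED_BYTE, glImg.constBits());
    return tex;
}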

Retrieve Vertices Data in THREE.js

I'm creating a mesh with a custom shader. Within the vertex shader I modify the original positions of the geometry's vertices. I then need to access these new vertex positions from outside the shader; how can I accomplish this?

In lieu of transform feedback (which WebGL 1.0 does not support), you will have to use a passthrough fragment shader and a floating-point texture (this requires loading the OES_texture_float extension). That is the only way to generate a vertex buffer on the GPU in WebGL. WebGL does not support pixel buffer objects either, so reading the output data back is

Are array textures related to sampler arrays?

Question: OpenGL has array textures, denoted in shaders by specific sampler types:

sampler2DArray array_texture;

But GLSL also allows samplers to be aggregated into arrays:

sampler2D array_of_textures[10];

Are these two features related to each other? How are they different?

Answer 1: Let's understand the distinction by analogy. Samplers in GLSL are like pointers in C++; they reference some other object of a given type. So consider the following C++ code:

int* pi;
std::array<int, 5>* pai;
std::array<int*, 5
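To make the analogy concrete, here is a short C++ sketch of the distinction the answer is drawing; the variable names are mine, not from the original answer. A sampler2DArray behaves like one pointer to a single object that happens to contain several layers, while sampler2D name[N] behaves like several independent pointers, each of which may reference a completely different texture.

#include <array>

int main()
{
    int a = 1, b = 2, c = 3;
    std::array<int, 3> layers = {4, 5, 6};

    // "sampler2DArray tex;" ~ one reference to one aggregate object:
    // a single texture binding whose layers live inside it.
    std::array<int, 3>* array_texture = &layers;

    // "sampler2D tex[3];" ~ three separate references, each of which can
    // point at a completely different object: three texture bindings.
    std::array<int*, 3> array_of_textures = {&a, &b, &c};

    (void)array_texture;
    (void)array_of_textures;
    return 0;
}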

Doing readback from Direct3D textures and surfaces

I need to figure out how to get the data from D3D textures and surfaces back to system memory. What's the fastest way to do this, and how? Also, if I only need one sub-rectangle, how can I read back just that portion without copying the entire resource to system memory? In short, I'm looking for concise descriptions of how to copy the following to system memory:

- a texture
- a subset of a texture
- a surface
- a subset of a surface
- a D3DUSAGE_RENDERTARGET texture
- a subset of a D3DUSAGE_RENDERTARGET texture

This is Direct3D 9, but answers about newer versions of D3D would be appreciated too.
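For the render-target entries in that list, a hedged sketch of the usual Direct3D 9 path follows; the function name and error handling are illustrative, not quoted from an answer in this thread. GetRenderTargetData copies the GPU surface into a D3DPOOL_SYSTEMMEM surface, and LockRect then exposes the pixels (optionally only a sub-rectangle) to the CPU. Multisampled targets need a StretchRect to a non-multisampled surface first.

#include <windows.h>
#include <d3d9.h>

// Copies a render-target surface to system memory and locks it for reading.
// Pass subRect = nullptr to lock the whole surface.
HRESULT ReadBackRenderTarget(IDirect3DDevice9* device, IDirect3DSurface9* rt,
                             const RECT* subRect)
{
    D3DSURFACE_DESC desc;
    rt->GetDesc(&desc);

    // System-memory surface with the same size and format as the target.
    IDirect3DSurface9* sysmem = nullptr;
    HRESULT hr = device->CreateOffscreenPlainSurface(
        desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM, &sysmem, nullptr);
    if (FAILED(hr)) return hr;

    // The copy covers the whole surface; the sub-rectangle is applied at lock time.
    hr = device->GetRenderTargetData(rt, sysmem);
    if (SUCCEEDED(hr)) {
        D3DLOCKED_RECT lr;
        hr = sysmem->LockRect(&lr, subRect, D3DLOCK_READONLY);
        if (SUCCEEDED(hr)) {
            // lr.pBits and lr.Pitch now describe the requested pixels.
            sysmem->UnlockRect();
        }
    }
    sysmem->Release();
    return hr;
}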

How to apply texture to glutSolidCube

I can find tutorials about mapping textures to polygons by specifying vertices and so on, but nothing about how to apply a texture to a cube (or other shapes) drawn with GLUT (glutSolidCube). I am doing something like:

glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, decal);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, repeat);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, repeat);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, nearest);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, nearest);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, 4,
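Part of the difficulty is that glutSolidCube() emits no texture coordinates at all, so binding a texture gives OpenGL nothing to map it with. One workaround, sketched below under that assumption rather than quoted from any answer here, is to let fixed-function OpenGL generate coordinates from object-space positions with glTexGen. Faces parallel to the generation planes map cleanly; the remaining faces receive projected coordinates, so drawing the cube yourself with explicit texture coordinates remains the fully general fix.

#include <GL/glut.h>

// Draws a unit cube with texture coordinates generated from object-space
// x/y positions (s = x + 0.5, t = y + 0.5 for a cube centered at the origin).
void drawTexturedCube(GLuint tex)
{
    static const GLfloat sPlane[] = { 1.0f, 0.0f, 0.0f, 0.5f };
    static const GLfloat tPlane[] = { 0.0f, 1.0f, 0.0f, 0.5f };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);
    glTexGenfv(GL_T, GL_OBJECT_PLANE, tPlane);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);

    glutSolidCube(1.0);

    glDisable(GL_TEXTURE_GEN_S);
    glDisable(GL_TEXTURE_GEN_T);
    glDisable(GL_TEXTURE_2D);
}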

iPhone OpenGL ES 2.0 - Pixel Perfect Textures

How can I get my textures to align with the screen pixels for pixel-perfect graphics in OpenGL ES 2.0? This is critical for a project I'm working on, which uses pixel-art graphics. Any help on this would be great!

datenwolf: See my answer here: OpenGL Texture Coordinates in Pixel Space. This has been asked a few times, but I don't have the links at hand, so here is a quick and rough explanation. Let's say the texture is 8 pixels wide:

|  0  |  1  |  2  |  3  |  4  |  5  |  6  |  7  |
^     ^     ^     ^     ^     ^     ^     ^     ^
0.0   |     |     |     |     |     |     |    1.0
|     |     |     |     |     |     |     |     |
0/8   1/8   2/8   3/8   4/8   5/8   6/8   7/8   8/8

The digits denote the texture's pixels, the
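The rule the diagram encodes: in an N-texel-wide texture, the center of texel i sits at (i + 0.5) / N in texture space, so pixel-perfect rendering means sampling exactly at those centers (typically with GL_NEAREST filtering) and setting up an orthographic projection so one texel lands on exactly one screen pixel. A tiny sketch of that mapping:

/* Texture coordinate of the center of texel i in a texture texWidth texels wide. */
float texelCenter(int i, int texWidth)
{
    return ((float)i + 0.5f) / (float)texWidth;
}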

Is it possible using video as texture for GL in iOS?

Is it possible to use video (pre-rendered, compressed with H.264) as a texture for GL in iOS? If possible, how is it done? And are there any playback quality or frame-rate limitations?

As of iOS 4.0, you can use AVCaptureDeviceInput to get the camera as a device input and connect it to an AVCaptureVideoDataOutput with any object you like set as the delegate. By requesting a 32bpp BGRA format from the camera, the delegate object will receive each frame in a format perfect for handing straight to glTexImage2D (or glTexSubImage2D if the device doesn't support non-power-of-two textures; I
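On the GL side, once the delegate hands over a frame, the upload itself is a single call per frame. A minimal sketch, assuming a tightly packed BGRA buffer and a texture allocated once beforehand with glTexImage2D; uploadVideoFrame is an illustrative name, and using GL_BGRA_EXT as the client format relies on the APPLE_texture_format_BGRA8888 extension being available.

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

/* Replaces the texture's contents with one decoded or captured BGRA frame. */
void uploadVideoFrame(GLuint videoTex, GLsizei width, GLsizei height,
                      const void *bgraPixels)
{
    glBindTexture(GL_TEXTURE_2D, videoTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, bgraPixels);
}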

Is it possible to use a 2d canvas as a texture for a cube?

Question: I want to add images to one face of a cube, possibly using a 2D canvas element as that face's texture. Here is my code, but I can't get the result I want: the face using the canvas as a texture is blank, while the other faces use THREE.ImageUtils.loadTexture and they're fine.

var renderer, camera, scene;
var canvas = document.createElement("canvas");
var context = canvas.getContext("2d");
var image0 = new Image();
var image1 = new Image();
image0.onload = function() {
    context.drawImage(image0, 0, 0)

How to manipulate texture content on the fly?

I have an iPad app I am working on, and one possible feature we are contemplating is allowing the user to touch an image and deform it. Basically, the image would be like a painting, and as the user drags their fingers across it, the image deforms and the touched pixels are "dragged" along with the gesture. Sorry if this is hard to understand, but the bottom line is that we want to edit the content of the texture on the fly as the user interacts with it. Is there an effective technique for something like this? I am trying to get a grasp of what would need to be done and

ffmpeg video to opengl texture

I'm trying to render frames grabbed and converted from a video using ffmpeg into an OpenGL texture to be put on a quad. I've pretty much exhausted Google and not found an answer; well, I've found answers, but none of them seem to have worked. Basically, I am using avcodec_decode_video2() to decode the frame, then sws_scale() to convert the frame to RGB, and then glTexSubImage2D() to create an OpenGL texture from it, but I can't seem to get anything to work. I've made sure the "destination" AVFrame has power-of-2 dimensions in the SwsContext setup. Here is my code:

SwsContext *img_convert_ctx = sws
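Once a frame has been decoded, the convert-and-upload step usually looks like the sketch below; uploadFrame and its arguments are illustrative rather than the poster's code, and it assumes the texture was already allocated with glTexImage2D at the frame's dimensions. One frequent culprit with RGB frames is row alignment: glPixelStorei(GL_UNPACK_ALIGNMENT, 1) is needed because 3-byte RGB rows are generally not 4-byte aligned.

#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
#include <libavutil/mem.h>
#include <libswscale/swscale.h>
#include <GL/gl.h>

/* Converts a decoded AVFrame to tightly packed RGB and uploads it into an
 * existing texture of the same size. */
void uploadFrame(GLuint tex, struct SwsContext **ctx, const AVFrame *frame)
{
    uint8_t *rgb[4];
    int rgb_linesize[4];

    *ctx = sws_getCachedContext(*ctx,
                                frame->width, frame->height, frame->format,
                                frame->width, frame->height, AV_PIX_FMT_RGB24,
                                SWS_BILINEAR, NULL, NULL, NULL);

    /* align = 1 keeps the RGB rows tightly packed, matching the unpack
     * alignment set below. */
    if (av_image_alloc(rgb, rgb_linesize, frame->width, frame->height,
                       AV_PIX_FMT_RGB24, 1) < 0)
        return;

    sws_scale(*ctx, (const uint8_t *const *)frame->data, frame->linesize,
              0, frame->height, rgb, rgb_linesize);

    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height,
                    GL_RGB, GL_UNSIGNED_BYTE, rgb[0]);

    av_freep(&rgb[0]);
}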