textures

How to visualize a depth texture in OpenGL?

I'm working on a shadow mapping algorithm, and I'd like to debug the depth map it generates on its first pass. However, depth textures don't seem to render properly to the viewport. Is there any easy way to display a depth texture as a greyscale image, preferably without using a shader?

You may need to change the depth texture parameters to display it as greyscale levels:

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE );

You can then use the texture as a 'normal' greyscale 2D texture,
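(The answer is cut off above.) A minimal sketch of how those two parameters might be used to blit the depth map to the viewport, assuming a compatibility-profile context; the function name and the depthTex argument are illustrative, not from the thread:

#include <GL/gl.h>

/* Draw a depth texture as a greyscale fullscreen quad (legacy GL). */
void drawDepthTextureFullscreen(GLuint depthTex)
{
    glBindTexture(GL_TEXTURE_2D, depthTex);
    /* Disable shadow comparison so sampling returns raw depth values,
       and expose depth as luminance (removed in core profiles). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
    glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

    glEnable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);

    /* Identity transforms: draw directly in normalized device coordinates. */
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glPopMatrix();                               /* modelview */
    glMatrixMode(GL_PROJECTION); glPopMatrix();  /* projection */
    glMatrixMode(GL_MODELVIEW);

    glEnable(GL_DEPTH_TEST);
}

Note that depth values are stored nonlinearly, so nearby geometry may come out almost uniformly white; linearizing the values for display does require a shader.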

Can OpenGL ES render textures of non base 2 dimensions?

Question: This is just a quick question before I dive deeper into converting my current rendering system to OpenGL. I heard that textures need to have power-of-two dimensions in order to be stored for rendering. Is this true? My application is very tight on memory, but most of the bitmaps are not powers of two. Does storing non-power-of-two textures consume more memory?

Answer 1: It's true depending on the OpenGL ES version: OpenGL ES 1.0/1.1 have the power-of-two restriction. OpenGL ES 2.0 doesn't have the limitation,
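As a hedged illustration of the version/extension split the answer describes, a runtime check like the following (the extension name is exactly as in the Khronos registry) lets a memory-tight app decide between uploading NPOT bitmaps directly or padding them:

#include <GLES/gl.h>
#include <string.h>

/* Returns nonzero if the driver advertises full NPOT texture support.
   On ES 2.0, NPOT textures work even without this extension, but only
   with CLAMP_TO_EDGE wrapping and without mipmaps. */
int hasFullNpotSupport(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_OES_texture_npot") != NULL;
}

If you do have to pad to the next power of two, memory use can grow by up to roughly 4x in the worst case (e.g. a 513x513 bitmap padded to 1024x1024), which is usually the deciding factor for memory-constrained applications.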

Haskell OpenGL texture GLFW

I have been trying to write a script that just displays a texture on a square using texcoords. If possible, can you edit the script so that it works? From there I can work out how you did it, as that's how I learn.

import Control.Monad (unless, when)
import Graphics.Rendering.OpenGL
import qualified Graphics.UI.GLFW as G
import System.Exit
import System.IO
import Texture
import Data.IORef
import Graphics.GLUtil
import qualified Data.Set as Set

main :: IO ()
main = do
  let errorCallback err description = hPutStrLn stderr description
  G.setErrorCallback (Just errorCallback)
  successfulInit <- G.init

What is the preferred way to show large images in OpenGL

Question: I've had this problem a couple of times. Let's say I want to display a splash screen or something in an OpenGL context (or DirectX for that matter; it's more of a conceptual thing). Now, I could just load a 2048x2048 texture and hope that the graphics card will cope with it (most will nowadays, I suppose), but having grown up with old-school graphics cards I have this bad conscience leaning over me telling me I shouldn't use textures that large. What is the preferred way nowadays? Is it to
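The thread is truncated before any answer; a common approach is simply to ask the driver what it supports. A hedged sketch, where uploadLargeImage and the commented-out tileImage helper are hypothetical names:

#include <GL/gl.h>

void uploadLargeImage(int width, int height, const void *pixels)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

    if (width <= maxSize && height <= maxSize) {
        /* Fits in a single texture: upload directly. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    } else {
        /* Too big for the hardware: split into maxSize x maxSize tiles
           and render them as adjacent quads (helper not shown). */
        /* tileImage(width, height, maxSize, pixels); */
    }
}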

Is it possible to bind an OpenCV GpuMat as an OpenGL texture?

Question: I haven't been able to find any reference except for http://answers.opencv.org/question/9512/how-to-bind-gpumat-to-texture/, which discusses a CUDA approach. Ideally I'd like to update an OpenGL texture with the contents of a cv::gpu::GpuMat without copying back to the CPU, and without directly using CUDA (although I presume this may be necessary until this feature is added).

Answer 1: OpenCV has OpenGL support. See the opencv2/core/opengl_interop.hpp header file. You can copy the content of GpuMat to
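The answer is truncated; the likely continuation is a copy into cv::ogl::Texture2D. A minimal sketch, assuming OpenCV 3+/4 built with OpenGL support (HAVE_OPENGL) and a current GL context; in those versions the interop lives in opencv2/core/opengl.hpp:

#include <opencv2/core/opengl.hpp>
#include <opencv2/core/cuda.hpp>

void gpuMatToTexture(const cv::cuda::GpuMat &frame, cv::ogl::Texture2D &tex)
{
    tex.copyFrom(frame);  /* device-to-device copy into the GL texture,
                             no round-trip through CPU memory */
    tex.bind();           /* binds the underlying GL texture id */
}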

Convert 3D model to SceneJS JSON, including texture

Question: Motive: I'm trying to create a small demo application using WebGL. I chose SceneJS because it seemed an easy framework and would more than suffice for this purpose. I downloaded a couple of .blend models (Amy Rose, amongst others) and exported them as Collada (.dae) files using Blender. Then I used scenejs-pycollada to convert them to JSON models. I just spent a couple of hours getting the scenejs-pycollada converter to work. Apparently getting those Python dependencies to work

How to fill each side of a cube with different textures on OpenGL ES 1.1?

Please, I need tutorials/code examples of how to fill each side of a cube with a different texture on OpenGL ES 1.1. I found a lot of tutorials, but none of them explain clearly how to put a different texture on each face, and none of them gives easy code examples of how to do it. My actual code (from the NeHe examples), which draws a cube with the same texture on each face:

public class Cube {
    /** The buffer holding the vertices */
    private FloatBuffer vertexBuffer;
    /** The buffer holding the texture coordinates */
    private FloatBuffer textureBuffer;
    /** The buffer holding the indices */
    private
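No answer survives the truncation, but the usual ES 1.1 technique is to keep per-face vertex/texcoord data and issue six draw calls, binding a different texture before each face. A hedged C sketch of that approach (the C API mirrors the Java bindings above; faceTex and the buffers are assumed to be set up elsewhere):

#include <GLES/gl.h>

void drawCube(const GLuint faceTex[6],
              const GLfloat *vertices,   /* 6 faces * 4 verts * 3 floats */
              const GLfloat *texCoords)  /* 6 faces * 4 verts * 2 floats */
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    for (int face = 0; face < 6; ++face) {
        glBindTexture(GL_TEXTURE_2D, faceTex[face]);
        glVertexPointer(3, GL_FLOAT, 0, vertices + face * 4 * 3);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords + face * 4 * 2);
        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);  /* one quad per face;
                                                 ES has no GL_QUADS */
    }

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}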

How do you scale an SKSpriteNode without anti-aliasing

I am trying to scale an SKSpriteNode object without smoothing/anti-aliasing (I'm using pixel art, so it looks better pixelated). Is there a property I need to set to do this? This is the code I am using to render the sprite:

SKTextureAtlas *atlas = [SKTextureAtlas atlasNamed:@"objects"];
SKTexture *f1 = [atlas textureNamed:@"hero_1.png"];
SKSpriteNode *hero = [SKSpriteNode spriteNodeWithTexture:f1];
[self addChild:hero];
hero.scale = 6.0f;

The image is scaled correctly but blurry/smoothed out. This is hero_1.png.

Try:

SKTexture *texture = [SKTexture textureWithImageNamed:@"Fak4o"];
texture.filteringMode = SKTextureFilteringNearest;

OpenGL 4.5 - bind multiple textures and samplers

I'm trying to understand textures, texture units and samplers in OpenGL 4.5. I'm attaching a picture of what I'm trying to figure out. I think everything in my example is correct, but I am not so sure about the 1D sampler on the right side with the question mark. So, I know OpenGL offers a number of texture units/binding points where textures and samplers can be bound so they work together. Each of these binding points can support one of each texture target (in my case, I'm binding targets GL_TEXTURE_2D and GL_TEXTURE_1D to binding point 0, and another GL_TEXTURE_2D to binding point 1).
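A minimal sketch (not from the thread) of populating those two binding points with the GL 4.5 direct-state-access calls; the loader (glad here) is an assumption, and all ids are assumed to have been created already with glCreateTextures / glCreateSamplers:

#include <glad/glad.h>

void bindExample(GLuint tex2dA, GLuint tex1d, GLuint tex2dB,
                 GLuint samp0, GLuint samp1)
{
    glBindTextureUnit(0, tex2dA); /* unit 0, 2D target */
    glBindTextureUnit(0, tex1d);  /* unit 0, 1D target: coexists with the 2D
                                     binding, but a shader may only sample one
                                     target per unit */
    glBindTextureUnit(1, tex2dB); /* unit 1, another 2D texture */

    glBindSampler(0, samp0);      /* a sampler bound to a unit overrides the  */
    glBindSampler(1, samp1);      /* sampling state of every texture sampled
                                     through that unit */
}

Note that glBindTextureUnit binds a texture to its own target within the unit, leaving the unit's other targets untouched, which is why a 2D and a 1D texture can sit on binding point 0 at the same time.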

setting up a CUDA 2D “unsigned char” texture for linear interpolation

I have a linear array of unsigned chars representing a 2D array. I would like to place it into a CUDA 2D texture and perform (floating point) linear interpolation on it, i.e., have the texture call fetch the 4 nearest unsigned char neighbors, internally convert them to float, interpolate between them, and return the resulting floating point value. I am having some difficulty setting up the texture and binding it to a texture reference. I have been through the CUDA reference manual & appendices, but I'm just not having any luck. Below is runnable code to set up and bind 1) a floating point
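The asker's code is cut off above. As a hedged sketch of the technique itself, here is the same setup using the newer texture-object API rather than the texture references the question mentions; the key settings are identical: cudaReadModeNormalizedFloat makes the hardware convert the uchar texels to floats in [0,1], which is what permits cudaFilterModeLinear interpolation.

#include <cuda_runtime.h>

__global__ void sample(cudaTextureObject_t tex, float *out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        /* +0.5f centers the coordinate on the texel; the fetch returns the
           hardware-interpolated float value of the 4 nearest uchar texels */
        out[y * w + x] = tex2D<float>(tex, x + 0.5f, y + 0.5f);
}

cudaTextureObject_t makeTexture(const unsigned char *hostData, int w, int h)
{
    cudaArray_t arr;
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<unsigned char>();
    cudaMallocArray(&arr, &desc, w, h);
    /* width is in bytes: 1 byte per uchar texel */
    cudaMemcpy2DToArray(arr, 0, 0, hostData, w, w, h, cudaMemcpyHostToDevice);

    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = arr;

    cudaTextureDesc td = {};
    td.filterMode = cudaFilterModeLinear;       /* hardware bilinear filtering */
    td.readMode = cudaReadModeNormalizedFloat;  /* uchar -> float in [0,1] */
    td.addressMode[0] = cudaAddressModeClamp;
    td.addressMode[1] = cudaAddressModeClamp;
    td.normalizedCoords = 0;                    /* use unnormalized (x,y) */

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &res, &td, nullptr);
    return tex;
}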