textures

Simplex noise vs Perlin noise

时光毁灭记忆、已成空白 submitted on 2019-12-03 03:30:57
Question: I would like to know why Perlin noise is still so popular today after Simplex came out. Simplex noise was made by Ken Perlin himself and it was supposed to take over from his old algorithm, which was slow for higher dimensions, while offering better quality (no visible artifacts). Simplex noise came out in 2001, and over those 10 years I've only seen people talk of Perlin noise when it comes to generating heightmaps for terrains, creating procedural textures, et cetera. Could anyone help me out, is there

Custom Texture Shader in Three.js

北慕城南 submitted on 2019-12-03 03:06:49
Question: I'm just looking to create a very simple fragment shader that draws a specified texture to the mesh. I've looked at a handful of custom fragment shaders that accomplished the same thing and built my own shaders and supporting JS code around them. However, it's just not working. Here's a working abstraction of the code I'm trying to run: Vertex Shader <script id="vertexShader" type="x-shader/x-vertex"> varying vec2 vUv; void main() { vUv = uv; gl_Position = projectionMatrix * modelViewMatrix * vec4

Does it make sense to use your own mipmap creation algorithm for OpenGL textures?

*爱你&永不变心* submitted on 2019-12-03 02:56:46
I was wondering if the quality of texture mipmaps would be better if I used my own algorithm for pre-generating them, instead of the built-in automatic one. I'd probably use a slow but pretty algorithm, like Lanczos resampling. Does it make sense? Will I get any quality gain on modern graphics cards? There are good reasons to generate your own mipmaps. However, the quality of the downsampling is not one of them. Game and graphics programmers have experimented with all kinds of downsampling algorithms in the past. In the end it turned out that the very simple "average four pixels" method gives
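For illustration, a minimal sketch (my own, not code from the answer) of that "average four pixels" box filter in C++: each mip level averages 2x2 blocks of the level above, and each level is then uploaded with glTexImage2D at the corresponding level index. It assumes tightly packed RGBA8 data with even width and height.

```cpp
#include <cstdint>
#include <vector>

// Produce the next-smaller mip level by averaging each 2x2 block (RGBA8, tightly packed).
std::vector<uint8_t> downsampleBox2x(const std::vector<uint8_t>& src, int w, int h) {
    std::vector<uint8_t> dst((w / 2) * (h / 2) * 4);
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            for (int c = 0; c < 4; ++c) {
                int sum = src[((2 * y)     * w + (2 * x))     * 4 + c]
                        + src[((2 * y)     * w + (2 * x + 1)) * 4 + c]
                        + src[((2 * y + 1) * w + (2 * x))     * 4 + c]
                        + src[((2 * y + 1) * w + (2 * x + 1)) * 4 + c];
                dst[(y * (w / 2) + x) * 4 + c] = static_cast<uint8_t>(sum / 4);
            }
        }
    }
    return dst; // upload with glTexImage2D(GL_TEXTURE_2D, level, ...) for each level
}
```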

The different addressing modes of CUDA textures

耗尽温柔 submitted on 2019-12-03 02:51:14
I am using a CUDA texture in border addressing mode ( cudaAddressModeBorder ). I am reading from the texture at given coordinates using tex2D<float>() . When the texture coordinates fall outside the texture, tex2D<float>() returns 0 . How can I change this returned border value from 0 to something else? I could check the texture coordinates manually and set the border value myself. I was wondering if there was a CUDA API where I can set such a border value. As mentioned by sgarizvi, CUDA supports only four non-customizable address modes, namely clamp , border , wrap and mirror , which are described in Section 3

sRGB textures. Is this correct?

你。 submitted on 2019-12-03 02:02:57
Question: I've recently been reading a little about sRGB formats and how they allow the hardware to automatically perform colour correction for typical monitors. As part of my reading, I see that you can simulate this step with an ordinary texture and a pow function on the returned result. Anyway, I want to ask two questions as I've never used this feature before. Firstly, can anyone confirm from my screenshot that this is what you would expect to see? The left picture is ordinary RGBA and the right

SurfaceTexture updateTexImage shared between 2 EGLContexts - Problems on Android 4.4

老子叫甜甜 submitted on 2019-12-03 00:50:59
I am referring to this excellent example of how to encode the preview frames of the camera directly into an mp4 file: http://bigflake.com/mediacodec/CameraToMpegTest.java.txt I have adapted the code so that I can also render the preview image on the screen. Therefore I have something like a GLTextureView with its own EGLContext. This context is then used as the shared EGLContext when I create the EGLContext for the encoder rendering: mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0], sharedContext == null ? EGL14.EGL_NO_CONTEXT : sharedContext, attrib_list, 0); In my

Can OpenGL ES render textures of non base 2 dimensions?

时光总嘲笑我的痴心妄想 submitted on 2019-12-02 23:06:53
This is just a quick question before I dive deeper into converting my current rendering system to OpenGL. I heard that textures needed to have power-of-two sizes in order to be stored for rendering. Is this true? My application is very tight on memory, but most of the bitmaps are not powers of two. Does storing non-power-of-two textures consume more memory? It depends on the OpenGL ES version: OpenGL ES 1.0/1.1 have the power-of-two restriction. OpenGL ES 2.0 doesn't have the limitation, but it restricts the wrap modes for non-power-of-two textures. Creating bigger textures to match POT

What is the preferred way to show large images in OpenGL

我只是一个虾纸丫 submitted on 2019-12-02 21:21:59
I've had this problem a couple of times. Let's say I want to display a splash screen or something in an OpenGL context (or DirectX for that matter; it's more of a conceptual thing). Now, I could just load a 2048x2048 texture and hope that the graphics card will cope with it (most will nowadays, I suppose), but having grown up with old-school graphics cards I have this bad conscience leaning over me and telling me I shouldn't use textures that large. What is the preferred way nowadays? Is it to just cram that thing into video memory, tile it, or let the CPU do the work with glDrawPixels? Or

Is it possible to bind a OpenCV GpuMat as an OpenGL texture?

◇◆丶佛笑我妖孽 submitted on 2019-12-02 21:06:10
I haven't been able to find any reference except for: http://answers.opencv.org/question/9512/how-to-bind-gpumat-to-texture/ which discusses a CUDA approach. Ideally I'd like to update an OpenGL texture with the contents of a cv::gpu::GpuMat without copying back to the CPU, and without directly using CUDA (although I presume this may be necessary until this feature is added). OpenCV has OpenGL support. See the opencv2/core/opengl_interop.hpp header file. You can copy the contents of a GpuMat to a Texture2D: cv::gpu::GpuMat d_mat(768, 1024, CV_8UC3); cv::ogl::Texture2D tex; tex.copyFrom(d_mat); tex.bind(); //

Copy Texture to Texture

巧了我就是萌 submitted on 2019-12-02 20:02:54
Question: I've written two programs that use shared resources, running on SlimDX & DirectX10. One program will display the shared texture on a 3D mesh. The second program will load an image as a texture. So far I need to pass the shared handle every time the texture is updated from a new image. Now, is there a way that I can initialize a fixed-size shared texture (Texture2D), then every time I load a new image, all I need to do is load it as a texture and copy it to the existing texture? This way the shared