shader

Unity3D Sprite … but single sided?

强颜欢笑 · submitted 2019-12-21 17:52:46

Question: Unity's excellent new Sprites, among other worthy advantages, are in fact double-sided. In a 2D or 3D use case, you can flip the little bastards around and still see them from behind - they are rendered from both sides. I also love the unlit shader used on them (i.e., Sprite-Default, not Sprite-Diffuse). However, I have need for an old-fashioned single-sided Sprite. Fortunately, you can freely download the source of the excellent shader used by Unity ...

Shader optimization for retina screen on iOS

喜你入骨 · submitted 2019-12-21 17:19:53

Question: I am making a 3D iPhone application which uses many billboards. My frame buffer is twice as large on the retina screen because I want to increase their quality on the iPhone 4. The problem is that the fragment shaders consume much more time due to the framebuffer size. Is there a way to manage the retina screen and high-definition textures without increasing shader precision? Answer 1: If you're rendering with a framebuffer at the full resolution of the Retina display, it will have four times as many pixels to raster over
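The answer fragment above hinges on a simple scaling fact: doubling the framebuffer scale factor quadruples the number of fragments shaded. A minimal sketch of that arithmetic (the resolutions are the iPhone's actual point size; the helper name is made up for illustration):

```python
def fragment_count(width, height, scale):
    """Pixels rasterized for a framebuffer rendered at `scale` times native size."""
    return int(width * scale) * int(height * scale)

base = fragment_count(480, 320, 1.0)    # non-retina framebuffer
retina = fragment_count(480, 320, 2.0)  # contentScaleFactor = 2.0 on iPhone 4
print(retina // base)  # -> 4: four times the fragment-shader work
```

This is why a common mitigation is rendering expensive passes at an intermediate scale (e.g. 1.5x) and upscaling, rather than lowering shader precision.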

texturing using texelFetch()

旧巷老猫 · submitted 2019-12-21 12:15:03

Question: When I pass non-max values into a texture buffer, rendering draws the geometry with colors at max values. I found this issue while using the glTexBuffer() API. E.g., let's assume my texture data is GLubyte: when I pass any value less than 255, the color is the same as that drawn with 255, instead of a mixture of black and that color. I tried on AMD and NVIDIA cards, but the results are the same. Can you tell me where I could be going wrong? I am copying my code here: Vert shader: in vec2 a_position;
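A likely cause of symptoms like this (an assumption, since the buffer's internal format is cut off above) is a mismatch between the buffer texture's internal format and the sampler: an unnormalized integer format (e.g. GL_R8UI with a usamplerBuffer) returns raw integers, which saturate to 1.0 when written to a float color output, so 128 and 255 look identical. A normalized format (GL_R8) returns value/255. The difference, sketched on the CPU:

```python
def normalized_fetch(texel):
    """What texelFetch returns for a normalized format such as GL_R8."""
    return texel / 255.0

def integer_fetch(texel):
    """What texelFetch returns for an integer format such as GL_R8UI."""
    return texel  # raw, unnormalized

def clamp_color(c):
    """Writing a value to a float color output clamps it to [0, 1]."""
    return max(0.0, min(1.0, float(c)))

print(clamp_color(integer_fetch(128)))    # -> 1.0, indistinguishable from 255
print(clamp_color(normalized_fetch(128))) # -> ~0.502, the expected mid-gray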

Enabling an extension on a Three.js shader

假如想象 · submitted 2019-12-21 07:48:11

Question: How can I enable an extension on a Three.js shader? My code so far: getting the extension: var domElement = document.createElement( 'canvas' ); var gl = domElement.getContext('webgl') || domElement.getContext('experimental-webgl'); gl.getExtension('OES_standard_derivatives'); on my shader: fragmentShader: [ "#extension GL_OES_standard_derivatives : enable", "code..." ]... The console output: WARNING: 0:26: extension 'GL_OES_standard_derivatives' is not supported ERROR: 0:32: 'dFdx' : no matching

Camera frame yuv to rgb conversion using GL shader language

荒凉一梦 · submitted 2019-12-21 06:08:08

Question: I am getting the camera frame from the Android camera Preview Callback as a byte array and passing it to JNI code. Since we can't use Java's byte directly in C++, I am converting it to an integer array as follows: JNIEXPORT void JNICALL Java_com_omobio_armadillo_Armadillo_onAndroidCameraFrameNative( JNIEnv* env, jobject, jbyteArray data, jint dataLen, jint width, jint height, jint bitsPerComponent) { Armadillo *armadillo = Armadillo::singleton(); jbyte *jArr = env->GetByteArrayElements(data, NULL); int dataChar
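Whatever the shader ends up doing per pixel, the core of the conversion is the standard BT.601 YUV-to-RGB transform (Android preview frames are NV21, i.e. YUV with chroma offset by 128). A reference implementation of the per-pixel math, useful for validating shader output:

```python
def yuv_to_rgb(y, u, v):
    """BT.601 full-range YUV (0-255, chroma centered at 128) to RGB."""
    d = u - 128.0
    e = v - 128.0
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(255, 128, 128))  # -> (255, 255, 255), pure white
print(yuv_to_rgb(0, 128, 128))    # -> (0, 0, 0), black
```

In a GLSL fragment shader the same coefficients are applied after sampling the Y plane and the interleaved VU plane as separate textures.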

Compile GLSL shader asynchronously or in a different thread on Android

会有一股神秘感。 · submitted 2019-12-21 05:47:10

Question: I'm writing an effect filter for Android devices which has two-dimensional loops in the fragment shader. On most devices the shader compiles and runs in reasonable time, but some devices take several minutes to compile the shader the first time. My fragment shader has a heavy two-dimensional kernel convolution: const lowp int KERNEL_RADIUS = 19; .... for (int y = -KERNEL_RADIUS; y <= KERNEL_RADIUS; y++) { for (int x = -KERNEL_RADIUS; x <= KERNEL_RADIUS; x++) { .... } }

Render TMX Map on Threejs Plane

天涯浪子 · submitted 2019-12-21 05:35:13

Question (updated with new code): I am trying to write a WebGL shader that will draw a TMX layer (exported from the Tiled editor). I am using THREE.js to create a Plane mesh whose material is a ShaderMaterial that will draw the map on it. For those who don't know: a tilemap exported by the Tiled editor as JSON gives a data attribute for each layer; it contains an array of numerical values, each of which is the tile index in the tileset, like: "data": [5438, 5436, 5437, 5438, 5436, 5437,
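The key step such a shader has to perform is mapping a TMX global tile id to UV coordinates inside the tileset texture. A CPU-side sketch of that index math (parameter names and the example numbers are hypothetical, not from the question; `first_gid` is the tileset's firstgid from the TMX file):

```python
def tile_uv(tile_index, first_gid, tileset_cols, tile_px, tileset_px):
    """Map a TMX global tile id to the top-left UV of its tile in the tileset.

    tileset_cols: tileset width in tiles; tile_px / tileset_px: tile and
    tileset sizes in pixels (a square tileset is assumed for brevity).
    """
    local = tile_index - first_gid       # 0-based index into the tileset
    col = local % tileset_cols
    row = local // tileset_cols
    u = col * tile_px / tileset_px
    v = row * tile_px / tileset_px
    return u, v

# e.g. gid 200 with firstgid 1, in a 16-column tileset of 32px tiles (512px wide):
print(tile_uv(200, 1, 16, 32, 512))  # -> (0.4375, 0.75)
```

In the shader, the layer's data array is typically uploaded as a texture, and this arithmetic runs per fragment with floor()/mod() instead of integer division.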

How do I draw a mirror mirroring something in OpenGL?

寵の児 · submitted 2019-12-21 05:24:13

Question: My understanding is that to mirror in OpenGL, you basically draw the scene, then flip everything over and draw it again, except you make it visible only through the mirror, thus creating a perfectly flipped image in the mirror. But the problem I see is that when doing this, the only mirrors that can see other mirrors are the ones rendered after them. So if I render mirror 1 then mirror 2, mirror 1 can't see mirror 2, but mirror 2 can see mirror 1. How do I effectively mirror
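The "flip everything over" step is a reflection across the mirror's plane, and mirror-in-mirror is handled by applying that reflection recursively per mirror up to a fixed depth (each level masked by the stencil buffer). A minimal sketch of the reflection itself, for a plane dot(x, n) = d with unit normal n:

```python
def reflect_point(p, n, d):
    """Reflect point p across the plane dot(x, n) = d; n must be unit length."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) - d  # signed distance to plane
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))

# Mirror lying in the plane x = 0 (n = (1, 0, 0), d = 0):
print(reflect_point((3.0, 1.0, 2.0), (1.0, 0.0, 0.0), 0.0))  # -> (-3.0, 1.0, 2.0)
```

In practice this is a 4x4 reflection matrix multiplied into the view matrix; note that a reflection flips winding order, so front-face culling must be toggled for each recursion level.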

OpenGL: Single vertex attribute for multiple vertices?

懵懂的女人 · submitted 2019-12-21 03:53:26

Question: I have a vertex shader that accepts the following attributes: a_posCoord: vertex position; a_texCoord: texture coordinate (passed to the fragment shader); a_alpha: transparency factor (passed to the fragment shader). The objects I'm rendering are all "billboards" (a pair of right triangles making a rectangle). I'm using a single call to glDrawArrays to render many billboards, each of which may have a unique alpha value. A single billboard has 6 vertices. Here's some pseudocode to illustrate how
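With plain glDrawArrays there is no per-primitive attribute, so the usual answer is to duplicate the billboard's alpha into all 6 of its vertices when building the buffer (per-instance attributes via glVertexAttribDivisor are the alternative where instancing is available). A sketch of the buffer-building step (the corner layout and helper name are illustrative, not from the question's pseudocode):

```python
def build_billboard_vertices(billboards):
    """Expand each billboard's single alpha to the 6 vertices of its 2 triangles.

    billboards: list of (corners, alpha), corners being four (x, y) tuples
    in the order bottom-left, bottom-right, top-left, top-right.
    """
    verts = []
    for corners, alpha in billboards:
        bl, br, tl, tr = corners
        for pos in (bl, br, tl, tl, br, tr):  # two triangles, 6 vertices
            verts.append((*pos, alpha))       # alpha replicated per vertex
    return verts

quad = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
data = build_billboard_vertices([(quad, 0.5)])
print(len(data))   # -> 6
print(data[0])     # -> (0.0, 0.0, 0.5)
```

The redundancy costs one float per vertex, which is normally negligible next to the draw-call savings of batching.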

How to properly clamp beckmann distribution

夙愿已清 · submitted 2019-12-21 02:35:34

Question: I am trying to implement a microfacet BRDF shading model (similar to the Cook-Torrance model) and I am having some trouble with the Beckmann distribution defined in this paper: https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf where M is a microfacet normal, N is the macrofacet normal and αb is a "hardness" parameter in [0, 1]. My issue is that this distribution often returns obscenely large values, especially when αb is very small. For instance, the Beckmann distribution is
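For context on why the values get large: the Beckmann D is a density over microfacet normals, not a reflectance, so its peak 1/(π·αb²) legitimately exceeds 1 for small roughness; the usual fix is to clamp or tone down the full BRDF term after the geometry and Fresnel factors and the denominator are applied, not D itself. A reference evaluation of the standard Beckmann form (an assumption that this matches the paper's parameterization):

```python
import math

def beckmann(cos_theta, alpha):
    """Beckmann microfacet distribution D(m), cos_theta = dot(N, M), alpha = roughness."""
    if cos_theta <= 0.0:
        return 0.0  # microfacets facing away contribute nothing
    c2 = cos_theta * cos_theta
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * c2 * c2)

# At theta = 0 the peak is 1 / (pi * alpha^2): huge for small alpha, by design.
print(beckmann(1.0, 0.05))  # ~127.3
```

Guarding cos_theta <= 0 also avoids the division blow-ups that produce NaNs at grazing angles.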