shader

Draw the depth value in OpenGL using shaders

Submitted by 為{幸葍}努か on 2019-11-27 10:57:21
Question: I want to draw the depth buffer in the fragment shader. I do this:

Vertex shader:

```glsl
varying vec4 position_;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position_ = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

Fragment shader:

```glsl
varying vec4 position_;

void main()
{
    float depth = ((position_.z / position_.w) + 1.0) * 0.5;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}
```

But all I get is white. What am I doing wrong?

Answer 1: In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this:
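The answer excerpt ends right where the window-space case would be shown. A minimal sketch of that case (using the standard built-in `gl_FragCoord`; not necessarily the original answer's exact code):

```glsl
void main()
{
    // gl_FragCoord.z already holds the window-space depth in [0, 1],
    // so no varying from the vertex shader is needed.
    // Note: with a perspective projection this value is highly non-linear;
    // most of the visible range maps close to 1.0, which is why a naive
    // depth visualization often looks almost uniformly white.
    gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
}
```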

Unity shader for visualizing model normals

Submitted by 给你一囗甜甜゛ on 2019-11-27 10:22:46
```shaderlab
// Upgrade NOTE: replaced '_World2Object' with 'unity_WorldToObject'
// Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'
// Upgrade NOTE: replaced '_Object2World' with 'unity_ObjectToWorld'
Shader "Custom/ShowNormals" {
    Properties {
        _LineLength ("LineLength", float) = 0.03
        _LineColor ("LineColor", COLOR) = (1,0,0,1)
    }
    SubShader {
        Pass {
            Tags { "RenderType" = "Opaque" }
            LOD 200
            CGPROGRAM
            #pragma target 5.0
            #pragma
```
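The listing above is cut off at the `#pragma` line; `#pragma target 5.0` suggests the full shader extrudes normal lines in a geometry shader. A much simpler way to visualize normals, sketched here as a plain fragment shader (the `vNormal` varying is an assumption, carrying the interpolated normal from the vertex stage):

```glsl
varying vec3 vNormal; // assumed: normal passed from the vertex shader

void main()
{
    // Remap each component of the unit normal from [-1, 1] to [0, 1]
    // so it can be displayed directly as an RGB color.
    gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```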

Unity Surface Shader example analysis

Submitted by 好久不见. on 2019-11-27 08:39:41
The overall code structure of a surface shader (Surface Shader) in Unity looks like this:

```shaderlab
Shader "name" {
    Properties {
        // Part 1
    }
    SubShader {
        // Part 2
    }
    Fallback "Diffuse" // Part 3
}
```

Part 1: the Properties block. Its role is to serve as a data interface, bringing external data (assets) into the shader for internal use. The property types we can declare here include:

(1) `_MainTex ("Base (RGB)", 2D) = "white" {}` — the 2D type, i.e. the texture map we use. `_MainTex` is a user-defined variable name; we reference it when writing the shader code, and it also appears in the shader's property panel (likewise below). `"Base (RGB)"` is a display name; it is shown in the material property panel of any material bound to this shader.

(2) `_CubeTex ("3D Map", Cube) = "white" {}` — the Cube type, a cubemap texture: six square faces sampled with a three-component direction vector.

(3) `_Color ("Main Color", Color) = (1,1,1,1)` — the Color type, a color made up of four components (R, G, B, A). `_Color` is a user-defined variable name
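Putting the three property types described above together, a minimal `Properties` block would look like this:

```shaderlab
Properties {
    _MainTex ("Base (RGB)", 2D)    = "white" {}   // 2D texture map
    _CubeTex ("3D Map", Cube)      = "white" {}   // cubemap
    _Color   ("Main Color", Color) = (1,1,1,1)    // RGBA color
}
```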

GLSL - Weird syntax error “<”

Submitted by 我怕爱的太早我们不能终老 on 2019-11-27 07:11:16
Question: I'm trying to use a shader, but it keeps giving me this error on both the fragment and vertex shader:

```
error(#132) Syntax error: "<" parse error
```

Vertex shader:

```glsl
varying vec4 diffuse;
varying vec4 ambient;
varying vec3 normal;
varying vec3 halfVector;

void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    halfVector = gl_LightSource[0].halfVector.xyz;
    diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
    ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    ambient += gl
```

Fragment Shader - Average Luminosity

Submitted by 梦想与她 on 2019-11-27 07:02:59
Question: Does anybody know how to find the average luminosity of a texture in a fragment shader? I have access to both RGB and YUV textures; the Y component in YUV is an array, and I want to get an average number from this array.

Answer 1: I recently had to do this myself for input images and video frames that I had as OpenGL ES textures. I didn't go with generating mipmaps for these, due to the fact that I was working with non-power-of-two textures, and you can't generate mipmaps for NPOT textures in OpenGL ES
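The answer goes on to describe a downsampling approach; the first step of any such pipeline is a pass that reduces each texel to its luminance. A sketch of that pass, using the Rec. 601 luma weights (the sampler and varying names are assumptions, not the answer's actual code):

```glsl
varying vec2 v_texCoord;
uniform sampler2D u_texture; // assumed name for the input RGB texture

void main()
{
    vec3 rgb = texture2D(u_texture, v_texCoord).rgb;
    // Rec. 601 luma weights; averaging these values over successively
    // smaller render targets down to 1x1 yields the mean luminosity.
    float luma = dot(rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(luma), 1.0);
}
```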

Pixel-perfect shader in Unity ShaderLab

Submitted by 浪子不回头ぞ on 2019-11-27 06:42:29
Question: In Unity, when writing shaders, is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels? I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or at any event with an ortho camera). (Of course, normally to show a physical-pixel-perfect PNG on screen, you merely have, say, a 400-pixel PNG, and you arrange scaling so that the shader happens to be drawing to precisely 400 physical
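Unity does expose the render-target resolution to shaders through the built-in `_ScreenParams` (x = width in pixels, y = height in pixels), so a fragment can recover its own physical pixel coordinate. A Cg sketch (the `screenPos` interpolator is an assumption, filled in the vertex shader with `ComputeScreenPos`):

```c
// Inside a Unity CGPROGRAM fragment shader:
// i.screenPos is assumed to be a float4 interpolator set in the vertex
// shader with o.screenPos = ComputeScreenPos(o.pos);
float2 uvScreen = i.screenPos.xy / i.screenPos.w;  // 0..1 across the screen
float2 pixel    = uvScreen * _ScreenParams.xy;     // physical pixel coordinate
```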

360 viewer in Unity, texture appears warped at the top and bottom

Submitted by 邮差的信 on 2019-11-27 06:21:29
Question: I am making a 360 viewer in Unity. To view a 360 photo, I used to have a cubemap attached to a skybox, and it worked great. But the weight of the cubemaps forced me to switch to textures. All of the 360-viewer tutorials say to just put a sphere with a shader on it and put the camera inside. When I do this, it doesn't work very well, because when I look at the top or bottom, I see the image warped like so: (The chairs are supposed to look normal.) It did not happen when I used a skybox. Does
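The question is cut off, but the warping it describes is the classic artifact of mapping an equirectangular image onto a sphere with per-vertex UVs: near the poles the interpolated UVs diverge badly from the true projection. One common fix is to compute the UVs per fragment from the view direction instead, sketched here in GLSL (the `vDir` varying and `_MainTex` sampler are assumptions):

```glsl
varying vec3 vDir;            // assumed: object-space direction to the fragment
uniform sampler2D _MainTex;   // assumed: the equirectangular 360 photo

const float PI = 3.14159265;

void main()
{
    vec3 d = normalize(vDir);
    // Equirectangular lookup: longitude -> u, latitude -> v.
    float u = atan(d.z, d.x) / (2.0 * PI) + 0.5;
    float v = asin(d.y) / PI + 0.5;
    gl_FragColor = texture2D(_MainTex, vec2(u, v));
}
```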

Do if-statements slow down my shader?

Submitted by 妖精的绣舞 on 2019-11-27 06:08:43
I want to know if if-statements inside shaders (vertex / fragment / pixel...) really slow down shader performance. For example, is it better to use this:

```glsl
vec3 output;
output = input * enable + input2 * (1.0 - enable);
```

instead of this:

```glsl
vec3 output;
if (enable == 1) {
    output = input;
} else {
    output = input2;
}
```

In another forum there was a discussion about this (2013): http://answers.unity3d.com/questions/442688/shader-if-else-performance.html Here the guys are saying that if-statements are really bad for the performance of the shader. They also talk about how much is inside
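For a simple two-way select like the one above, GLSL also has a built-in branchless form, which is usually at least as good as the hand-written arithmetic (a sketch; whether either beats an actual `if` depends heavily on the GPU and on how coherent the branch is across neighboring fragments):

```glsl
// mix(a, b, t) returns a * (1.0 - t) + b * t, so with t in {0.0, 1.0}
// it acts as a branchless select between the two inputs.
vec3 output = mix(input2, input, float(enable));
```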

Page-turning shader

Submitted by 谁都会走 on 2019-11-27 05:32:28
```shaderlab
// Attach the shader below to a Plane and adjust _Angle.
Shader "Unlit/PageTurning"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("MainTex", 2D) = "White" {}
        _SecTex ("SecTex", 2D) = "White" {}
        _Angle ("Angle", Range(0,180)) = 0
        _Warp ("Warp", Range(0,10)) = 0
        _WarpPos ("WarpPos", Range(0,1)) = 0
        _Downward ("Downward", Range(0,1)) = 0
    }
    SubShader
    {
        Pass // front face
        {
            Cull Back
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            fixed4 _Color;
            float _Angle;
            float _Warp;
            float _Downward;
            float _WarpPos;
            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert
```
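The listing breaks off at the start of the vertex function. The core of a page turn is rotating each vertex around the spine by `_Angle`; a hypothetical sketch of that continuation (ignoring the `_Warp`, `_WarpPos`, and `_Downward` effects, and assuming the page extends along +x with the spine at x = 0):

```c
// Hypothetical continuation, not the original author's code:
v2f vert (appdata_base v)
{
    v2f o;
    float s, c;
    sincos(radians(_Angle), s, c);  // rotation around the spine (the z axis)
    float4 p = v.vertex;
    float x0 = p.x;
    p.x = x0 * c;                   // page sweeps up and over as _Angle grows
    p.y = x0 * s;
    o.pos = UnityObjectToClipPos(p);
    o.uv  = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}
```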

How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?

Submitted by 橙三吉。 on 2019-11-27 02:53:13
Unlike Android, I'm relatively new to GL/libgdx. The task I need to solve, namely rendering the Android camera's YUV-NV21 preview image to the screen background inside libgdx in real time, is multi-faceted. Here are the main concerns: the Android camera's preview image is only guaranteed to be in the YUV-NV21 space (and in the similar YV12 space, where the U and V channels are not interleaved but grouped). Assuming that most modern devices will provide implicit RGB conversion is VERY wrong; e.g. the newest Samsung Note 10.1 2014 version only provides the YUV formats. Since nothing can be drawn to the
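The excerpt stops mid-sentence, but the hard part it is building toward is converting NV21 on the GPU. A common approach (a sketch under assumed setup: the Y plane uploaded as a full-size GL_LUMINANCE texture and the interleaved VU plane as a half-size GL_LUMINANCE_ALPHA texture) is a fragment shader doing the BT.601 conversion:

```glsl
#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord;
uniform sampler2D y_texture;   // assumed: full-size Y plane (GL_LUMINANCE)
uniform sampler2D uv_texture;  // assumed: half-size VU plane (GL_LUMINANCE_ALPHA)

void main()
{
    float y = texture2D(y_texture, v_texCoord).r;
    // In NV21 the chroma bytes are interleaved V,U; with LUMINANCE_ALPHA
    // the V byte lands in the .r channel and the U byte in the .a channel.
    float u = texture2D(uv_texture, v_texCoord).a - 0.5;
    float v = texture2D(uv_texture, v_texCoord).r - 0.5;

    // BT.601 YUV -> RGB
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}
```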