shader

Images and mask in OpenGL ES 2.0

Submitted by ▌冷眼眸甩不掉的悲伤 on 2019-12-18 05:45:09
Question: I'm learning OpenGL ES 2.0 and I'd like to create an App to better understand how it works. The App has a set of filters that the user can apply to images (I know, nothing new :P). One of these filters takes two images and a mask and mixes the two images, showing them through the mask (here is an image to better explain what I want to obtain). At the moment I'm really confused and I don't know where to start to create this effect. I can't understand whether I have to work with multiple textures and…
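One common way to get this kind of effect is to bind both images and the mask as three separate textures and blend them in the fragment shader. Below is a minimal sketch, assuming the three textures are already bound to texture units 0, 1, and 2; the uniform and varying names (u_image0, u_image1, u_mask, v_texCoord) are made up for illustration:

precision mediump float;

varying vec2 v_texCoord;       // texture coordinate passed in from the vertex shader
uniform sampler2D u_image0;    // first image (texture unit 0)
uniform sampler2D u_image1;    // second image (texture unit 1)
uniform sampler2D u_mask;      // grayscale mask (texture unit 2)

void main() {
    vec4 colorA = texture2D(u_image0, v_texCoord);
    vec4 colorB = texture2D(u_image1, v_texCoord);
    float m = texture2D(u_mask, v_texCoord).r;   // 0.0 shows image A, 1.0 shows image B
    gl_FragColor = mix(colorA, colorB, m);
}

On the CPU side, each texture is bound to its own unit with glActiveTexture/glBindTexture, and the three sampler uniforms are pointed at units 0, 1, and 2.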

How to implement MeshNormalMaterial in THREE.js by GLSL?

Submitted by 故事扮演 on 2019-12-18 05:12:32
Question: I want to implement a shader like MeshNormalMaterial, but I have no idea how to convert a normal to a color. In THREE.js:

My test1:

varying vec3 vNormal;
void main(void) {
    vNormal = abs(normal);
    gl_Position = matrix_viewProjection * matrix_model * vec4(position, 1.0);
}

varying vec3 vNormal;
void main(void) {
    gl_FragColor = vec4(vNormal, 1.0);
}

My test2:

varying vec3 vNormal;
void main(void) {
    vNormal = normalize(normal) * 0.5 + 0.5;
    gl_Position = matrix_viewProjection * matrix_model * vec4…
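For reference, the usual normal-to-color mapping used by MeshNormalMaterial-style shaders is to transform the normal into view space with the normal matrix and then remap it from [-1, 1] to [0, 1]. A minimal sketch using the uniforms and attributes that Three.js injects into a ShaderMaterial (normalMatrix, modelViewMatrix, projectionMatrix, normal, position); anything else here is illustrative.

Vertex shader:

varying vec3 vNormal;
void main(void) {
    // normalMatrix is the inverse transpose of the upper-left 3x3 of modelViewMatrix
    vNormal = normalize(normalMatrix * normal);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

Fragment shader:

varying vec3 vNormal;
void main(void) {
    // remap the view-space normal from [-1, 1] to the [0, 1] color range
    vec3 color = normalize(vNormal) * 0.5 + 0.5;
    gl_FragColor = vec4(color, 1.0);
}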

Shaders: A Beginner's Introduction, Part 2

Submitted by 吃可爱长大的小学妹 on 2019-12-18 03:41:51
Shaders: A Beginner's Introduction, Part 2

Preface: They say shaders are hard? Let's not worry about graphics theory, formal tutorials, or any of that other clutter. We'll work on our shaders directly in Unity, learning and understanding step by step until we master them!

Who should write shaders? This is an interesting question: who should actually be writing shaders? The people who typically have this skill are graphics programmers, a small share of gameplay programmers, an even smaller share of artists, and technical artists. Most project teams, unless they build their own engine, rarely have a dedicated graphics programmer, and putting such a specialist on ordinary shader work feels like a waste, so let's set them aside. As for programmers, every team has one or two capable ones, but their own sense of aesthetics lets them down, and after endless back-and-forth with the artists they eventually give up. As for artists, one who can write shaders is rare; the results may look gorgeous, but would you really dare ship them? Which brings us to our protagonist, the technical artist, which also happens to be my dream job: someone who combines programming and art in one person, delivering effects that look great and perform reliably. So if you are a technical artist, make sure the artists are satisfied with the visuals and the programmers are comfortable with the performance; finding the best balance between effect and performance is the real advantage we bring to shader work, not merely the fact that we can do it!

Shader templates: Now let's create our first shader. Open Unity, right-click in the Project panel, and choose Create/Shader/… You will see several options: Standard Surface Shader, a standard surface shader based on a physically based shading system…

HLSL: Enforce Constant Register Limit at Compile Time

Submitted by 三世轮回 on 2019-12-17 21:17:42
Question: In HLSL, is there any way to limit the number of constant registers that the compiler uses? Specifically, if I have something like:

float4 foobar[300];

in a vs_2_0 vertex shader, the compiler will merrily generate the effect with more than 256 constant registers. But a 2.0 vertex shader is only guaranteed to have access to 256 constant registers, so when I try to use the effect, it fails in an obscure and GPU-dependent way at runtime. I would much rather have it fail at compile time. This…

Curved Frosted Glass Shader?

Submitted by 若如初见. on 2019-12-17 21:16:13
Question: Well, making something transparent isn't that difficult, but I need that transparency to vary based on the object's curvature, to make it look like it isn't just a flat object. Something like the picture below: the center is more transparent than the sides of the cylinder, i.e. closer to black, which is the background color. Then there is the bezel, which seems to have some sort of specular lighting at the top to make it shinier, but I'd have no idea how to go about that transparency in that…
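A common building block for this kind of curved-glass look is to drive the opacity by the angle between the surface normal and the view direction (a Fresnel-style term): fragments facing the camera (the middle of the cylinder) stay mostly transparent, while grazing-angle fragments (the silhouette edges) become more opaque. A minimal fragment-shader sketch, assuming the vertex shader passes in a view-space normal and view direction; the varying and uniform names are made up:

precision mediump float;

varying vec3 vNormal;        // view-space normal, interpolated from the vertex shader
varying vec3 vViewDir;       // direction from the fragment toward the camera
uniform vec3 u_glassColor;   // tint of the glass

void main() {
    float facing = abs(dot(normalize(vNormal), normalize(vViewDir)));  // 1.0 = facing the camera
    float fresnel = pow(1.0 - facing, 2.0);                            // grows toward the edges
    float alpha = mix(0.15, 0.85, fresnel);                            // transparent center, opaque rim
    gl_FragColor = vec4(u_glassColor, alpha);
}

For this to show the background through the glass, alpha blending has to be enabled and the object drawn after the opaque geometry; the bezel's shiny highlight would be layered on top with a separate specular term.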

Shader for Android OpenGL ES

Submitted by 偶尔善良 on 2019-12-17 18:19:00
Question: Is it possible to use vertex or pixel shaders in an Android app? Please give an example, if possible, of setting up a shader.

Answer 1: If you're targeting Android 2.x / OpenGL ES 2, then yes, it's possible. Here is a code example of how to load a shader:

public int createProgram(String vertexSource, String fragmentSource) {
    int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexSource);
    int pixelShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentSource);
    int program = GLES20.glCreateProgram()…
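For completeness, here is a minimal GLSL ES 2.0 shader pair that could be fed to a loader like the one above as vertexSource and fragmentSource; the attribute and uniform names are illustrative, not part of any fixed API.

Vertex shader:

attribute vec4 a_position;    // per-vertex position supplied from the app via glVertexAttribPointer
uniform mat4 u_mvpMatrix;     // combined model-view-projection matrix set with glUniformMatrix4fv

void main() {
    gl_Position = u_mvpMatrix * a_position;
}

Fragment shader:

precision mediump float;
uniform vec4 u_color;         // flat color set from Java via glUniform4f

void main() {
    gl_FragColor = u_color;
}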

OpenGL - How to access depth buffer values? - Or: gl_FragCoord.z vs. Rendering depth to texture

Submitted by 筅森魡賤 on 2019-12-17 17:58:11
Question: I want to access the depth buffer value at the currently processed pixel in a pixel shader. How can we achieve this goal? Basically, there seem to be two options:

1. Render depth to a texture. How can we do this, and what is the trade-off?
2. Use the value provided by gl_FragCoord.z. But: is this the correct value?

Answer 1: On question 1: you can't directly read from the depth buffer in the fragment shader (unless there are recent extensions I'm not familiar with). You need to render to a Frame Buffer…
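As a rough illustration of option 1, and of how it relates to option 2: after rendering the scene's depth into a texture attached to an FBO in an earlier pass, a later pass can sample that texture at the current fragment and compare it with its own gl_FragCoord.z. A sketch with made-up uniform names, assuming the depth texture covers the full viewport:

precision mediump float;

uniform sampler2D u_depthTex;    // depth texture filled in a previous FBO pass
uniform vec2 u_viewportSize;     // viewport size in pixels

void main() {
    // gl_FragCoord.xy is in window (pixel) coordinates; convert to [0, 1] texture coordinates
    vec2 uv = gl_FragCoord.xy / u_viewportSize;

    float storedDepth  = texture2D(u_depthTex, uv).r;  // depth written in the earlier pass, in [0, 1]
    float currentDepth = gl_FragCoord.z;               // this fragment's window-space depth, in [0, 1]

    // example use: visualize the difference between the two depth values
    gl_FragColor = vec4(vec3(abs(storedDepth - currentDepth) * 50.0), 1.0);
}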

GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth

Submitted by China☆狼群 on 2019-12-17 17:38:07
Question: So, I've got an imposter (the real geometry is a cube, possibly clipped, and the imposter geometry is a Menger sponge) and I need to calculate its depth. I can calculate the amount to offset in world space fairly easily. Unfortunately, I've spent hours failing to perturb the depth with it. The only correct results I can get are when I go:

gl_FragDepth = gl_FragCoord.z

Basically, I need to know how gl_FragCoord.z is calculated so that I can: take the inverse transformation from gl_FragCoord.z…
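For orientation: gl_FragCoord.z is the clip-space z divided by w, remapped from NDC [-1, 1] into the window depth range (gl_DepthRange, which defaults to [0, 1]). So one way to write a consistent gl_FragDepth for an offset position is to re-project that position with the same view-projection matrix and apply the same remapping. A hedged sketch; the uniform names and the offset computation are placeholders, not the poster's actual code:

varying vec3 vWorldPos;           // interpolated world-space position of the rasterized fragment
uniform mat4 u_viewProjection;    // same view-projection matrix used for the real geometry

void main() {
    // hypothetical: the hit point found by the imposter logic, offset from the rasterized surface
    vec3 hitPos = vWorldPos;  // + some world-space offset

    // re-project the hit point exactly like the vertex pipeline would
    vec4 clipPos = u_viewProjection * vec4(hitPos, 1.0);
    float ndcZ = clipPos.z / clipPos.w;   // NDC z in [-1, 1]

    // same remapping the viewport transform applies to produce gl_FragCoord.z
    gl_FragDepth = 0.5 * (gl_DepthRange.diff * ndcZ + gl_DepthRange.near + gl_DepthRange.far);

    gl_FragColor = vec4(1.0);  // placeholder shading
}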

How to render clipped surfaces as solid objects

Submitted by 自古美人都是妖i on 2019-12-17 11:13:32
Question: In Three.js, I have a 3D object where I am using local clipping planes to only render part of the object. However, since 3D objects are "hollow" (meaning only the outer surface is rendered), when we clip anything off that surface we can "see into" the object. Here's an example of what I mean, clipping a corner off a cube. Notice how we can see the backside of the opposite corner. I would like to give the appearance of the object being solid. Based on this issue, it seems that the best way…
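One lightweight trick (an approximation, not a true cap across the clip plane) is to render the material double-sided and give back-facing fragments a flat "interior" color, so looking through the clipped hole shows a solid surface instead of the inside of the shell. A fragment-shader sketch; the uniform names are made up, and in Three.js the material would also need side: THREE.DoubleSide, its clippingPlanes set, and renderer.localClippingEnabled = true:

precision mediump float;

uniform vec3 u_surfaceColor;    // normal shading color for the outside of the object
uniform vec3 u_interiorColor;   // flat color used to fake the solid interior

void main() {
    if (gl_FrontFacing) {
        gl_FragColor = vec4(u_surfaceColor, 1.0);    // regular front-facing shading
    } else {
        gl_FragColor = vec4(u_interiorColor, 1.0);   // back faces seen through the clipped hole
    }
}

A genuinely flat, watertight cap on the clip plane generally requires a stencil-based capping pass instead of this per-fragment trick.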
