shader

Getting shadows to work in Three.js custom shader

こ雲淡風輕ζ submitted on 2019-12-01 19:17:27
I'm trying to get shadows to work in a custom shader in Three.js. I've tried adding these to my code:

In uniforms: THREE.UniformsLib["shadowmap"]
In the fragment shader: THREE.ShaderChunk["shadowmap_pars_fragment"] and THREE.ShaderChunk["shadowmap_fragment"]
In the vertex shader: THREE.ShaderChunk["shadowmap_pars_vertex"] and THREE.ShaderChunk["shadowmap_vertex"]

This works: the object can receive shadows. However, it cannot cast shadows. Does anyone know what other bits of code are needed?

Answer 1: I believe that you need to mark each object as casting and receiving shadows. I think it's just obj
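For reference, the usual setup is to flag shadow casting and receiving on the renderer, the light, and each mesh. A minimal sketch (assuming a recent Three.js with a WebGLRenderer, a DirectionalLight, and a mesh named mesh; in older versions the renderer flag was renderer.shadowMapEnabled):

```js
// Shadow maps must be enabled on the renderer itself.
renderer.shadowMap.enabled = true;

// The light must be flagged as a shadow caster.
const light = new THREE.DirectionalLight(0xffffff, 1);
light.castShadow = true;
scene.add(light);

// Each object opts in to casting and/or receiving shadows.
mesh.castShadow = true;     // this object can throw shadows onto others
mesh.receiveShadow = true;  // this object can show shadows cast onto it
```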

How can I compile asm shader to fxo file?

旧城冷巷雨未停 submitted on 2019-12-01 17:52:49
Question: I have a compiled fxo shader which I'm trying to edit slightly (just adjusting some constants). Using fxdis (https://code.google.com/archive/p/fxdis-d3d1x/) I can disassemble this shader; this is the output:

```
# DXBC chunk 0: RDEF offset 52 size 972
# DXBC chunk 1: ISGN offset 1032 size 80
# DXBC chunk 2: OSGN offset 1120 size 44
# DXBC chunk 3: SHEX offset 1172 size 1592
# DXBC chunk 4: STAT offset 2772 size 148
ps_5_0
dcl_global_flags refactoringAllowed
dcl_constant_buffer cb0[30].xyzw,
```
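As far as I know there is no official tool that reassembles shader-model asm back into DXBC, so one practical route is to reconstruct the HLSL (with the adjusted constants) and recompile it. A sketch using the Windows SDK's fxc compiler (the file names and the entry point main are illustrative assumptions):

```
fxc /T ps_5_0 /E main /Fo shader.fxo shader.hlsl
```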

Noise Algorithm fails in Samsung Galaxy SIII (GLES)

最后都变了- submitted on 2019-12-01 16:58:50
I am struggling to get the following simple algorithm working on the Samsung Galaxy SIII:

```glsl
float rand(vec2 co) {
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
....
vec3 color = texture2D(u_texture, v_texcoord).rgb;
gl_FragColor.rgb = color + vec3(rand(gl_FragCoord.xy + time / 1000.0));
....
```

The code generates the expected noise perfectly on the Samsung Galaxy S1 and the Google Nexus S, but it fails completely on the new phone, which uses ARM's Mali-400/MP4. Can anyone spot anything wrong with this algorithm, or see why it might fail?

Answer 1: Your problem likely comes from taking
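The answer's direction (truncated above) matches the well-known cause on Mali-400: limited fragment-shader float precision makes sin() of a large dot product collapse to a constant. A common workaround, sketched here under the assumption of a GLSL ES 1.00 fragment shader, is to request highp where available and keep the sin() argument small:

```glsl
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;   // use high precision when the GPU supports it
#else
precision mediump float;
#endif

float rand(vec2 co) {
    // mod() keeps the sin() argument in a range that low-precision
    // hardware can still represent accurately.
    float dt = dot(co.xy, vec2(12.9898, 78.233));
    float sn = mod(dt, 3.14159265);
    return fract(sin(sn) * 43758.5453);
}
```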

Performance of different CG/GLSL/HLSL functions

北慕城南 submitted on 2019-12-01 16:47:01
There are standard libraries of shader functions, such as for Cg. But are there resources that tell you how long each takes? I'm thinking of something similar to how you used to be able to look up how many cycles each ASM op would take.

Answer 1: There are no reliable resources that will tell you how long various standard shader functions take, not even for a particular piece of hardware. The reason for this has to do with instruction scheduling and the way modern shader architectures work. Take a simple sin function. Let's say that the hardware has a special unit to compute the sine of a value, so it's not
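Since published cycle counts don't exist, the practical alternative is to measure a draw call on the target GPU. A minimal WebGL2 sketch (assuming a WebGL2 context gl, a program already bound, a vertexCount variable, and that the browser exposes the EXT_disjoint_timer_query_webgl2 extension, which is not universally available):

```js
const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
if (ext) {
  const query = gl.createQuery();
  gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
  gl.drawArrays(gl.TRIANGLES, 0, vertexCount); // draw with the shader under test
  gl.endQuery(ext.TIME_ELAPSED_EXT);

  // Results arrive asynchronously; poll on a later frame.
  function poll() {
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE) && !disjoint) {
      const ns = gl.getQueryParameter(query, gl.QUERY_RESULT); // nanoseconds
      console.log('GPU time:', ns / 1e6, 'ms');
    } else {
      requestAnimationFrame(poll);
    }
  }
  requestAnimationFrame(poll);
}
```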

Speed of cos() and sin() function in GLSL shaders?

时光总嘲笑我的痴心妄想 submitted on 2019-12-01 15:45:35
I'm interested in information about the speed of sin() and cos() in the OpenGL Shading Language. The GLSL specification document indicates that: "The built-in functions basically fall into three categories: ... They represent an operation graphics hardware is likely to accelerate at some point. The trigonometry functions fall into this category."

EDIT: As has been pointed out, counting clock cycles of individual operations like sin() and cos() doesn't really tell the whole performance story. So to clarify my question: what I'm really interested in is whether it's worthwhile to optimize away sin
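For a sense of what "optimizing away sin" can look like, here is one widely circulated parabola approximation, sketched in GLSL; whether it actually beats the built-in sin() depends entirely on the GPU, so it only makes sense after profiling on the target hardware:

```glsl
const float PI = 3.14159265;

// Parabola approximation of sin(x), valid for x in [-PI, PI].
float fastSin(float x) {
    float y = (4.0 / PI) * x + (-4.0 / (PI * PI)) * x * abs(x);
    // One refinement step pulls the max error down to roughly 0.001.
    return 0.775 * y + 0.225 * y * abs(y);
}
```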

Are GPU shaders Turing complete

北城以北 submitted on 2019-12-01 15:42:53
I understand that complete GPUs are behemoths of computing, including every step of calculation and memory. So obviously a GPU can compute whatever we want; it's Turing complete. My question is in regard to a single shader on various GPUs ("Stream Processor"/"CUDA Core"): is it Turing complete? Can I (in theory) compute an arbitrary function over arbitrary inputs by using a single shader? I'm trying to understand at what "scale" of computation shaders live.

Answer 1 (Kamil Czerski): Did you mean shader as a program used to compute shading? On a wiki talk page I found: (...) Shader models 1.x and 2.0 are indeed
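To make the "scale" question concrete: modern shader models do support data-dependent branches and loops, so a single fragment shader invocation can iterate, as in this illustrative GLSL ES sketch (a Mandelbrot step loop; note that Turing completeness proper would also require unbounded memory, which one invocation does not have):

```glsl
precision highp float;
uniform vec2 u_resolution; // assumed uniform: canvas size in pixels

void main() {
    // Map this pixel into a window of the complex plane.
    vec2 c = (gl_FragCoord.xy / u_resolution) * 3.0 - vec2(2.0, 1.5);
    vec2 z = vec2(0.0);
    float shade = 0.0;
    // Bounded loop with a data-dependent early exit.
    for (int i = 0; i < 64; i++) {
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        if (dot(z, z) > 4.0) break;
        shade = float(i) / 64.0;
    }
    gl_FragColor = vec4(vec3(shade), 1.0);
}
```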

Using a different vertex and fragment shader for each object in webgl

℡╲_俬逩灬. submitted on 2019-12-01 14:52:08
I have a scene with multiple objects in WebGL. For each object I want to use a different vertex and fragment shader. My first question is: is it possible to have a shader for each object? I am aware it is possible in OpenGL. This is pseudocode similar to what I had in mind; any example would be much appreciated:

```
glenableshader
draw triangle
gldisableshader

glenableshader
draw square
gldisableshader
```

Answer 1: You can look up pretty much any WebGL example and turn it into a multiple-shader example. Pseudocode:

```
// At init time
for each shader program
    create and compile vertex shader
```
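A minimal sketch of the pattern the answer is describing (assuming a WebGL context gl and a helper createProgram(gl, vsSource, fsSource) that compiles and links a program; the helper and the shader sources are illustrative, not part of the original answer):

```js
// At init time: build one program per object/material type.
const flatProgram = createProgram(gl, flatVS, flatFS); // e.g. solid color
const texProgram  = createProgram(gl, texVS, texFS);   // e.g. textured

// At render time: switch the active program per object.
function drawScene() {
  gl.useProgram(flatProgram);
  // ...bind flatProgram's attributes and uniforms for the triangle...
  gl.drawArrays(gl.TRIANGLES, 0, 3);

  gl.useProgram(texProgram);
  // ...bind texProgram's attributes and uniforms for the square...
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
```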