hlsl

Calculating screen texture coordinates in CG/HLSL

Submitted by 蓝咒 on 2019-12-04 17:00:38
In OpenGL, when doing multi-pass rendering and post-processing, I sometimes need to sample texels for fragments that belong to a full-screen texture composition. That is usually the case when the current pass reads from an FBO texture to which a screen quad was rendered during the previous pass. To achieve this I calculate the object's UV coordinates in SCREEN SPACE. In GLSL I calculate it like this: vec2 texelSize = 1.0 / vec2(textureSize(TEXTURE, 0)); vec2 screenTexCoords = gl_FragCoord.xy * texelSize; Now I am experimenting with Unity3D, which uses CG/HLSL. The docs for
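
A minimal HLSL/CG sketch of the equivalent calculation, assuming a Direct3D-style SV_Position input and a float2 constant holding the render-target size (screenSize, screenTexture and the entry-point name are illustrative, not Unity built-ins):

```hlsl
// SV_Position in a pixel shader carries window-space pixel coordinates
// (x + 0.5, y + 0.5), analogous to gl_FragCoord.xy in GLSL.
struct PSInput
{
    float4 pos : SV_Position;
};

float2    screenSize;      // render-target size in pixels, set from the CPU side
sampler2D screenTexture;   // full-screen texture rendered in the previous pass

float4 PS(PSInput i) : SV_Target
{
    // Dividing window-space pixel coordinates by the target size gives [0,1] UVs.
    float2 screenUV = i.pos.xy / screenSize;
    return tex2D(screenTexture, screenUV);
}
```

In Unity specifically, _ScreenParams.xy holds the current render-target size, and the ComputeScreenPos helper in UnityCG.cginc covers the common case of deriving screen UVs from a clip-space position.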

Octree raycasting/raytracing - best ray/leaf intersection without recursion

Submitted by 江枫思渺然 on 2019-12-04 13:47:34
Question: Could anyone provide a short and sweet explanation (or suggest a good tutorial) on how to cast a ray against a voxel octree without recursion? I have a complex model baked into an octree, and I need to find the best/closest leaf that intersects a ray. A standard drill-down iterative tree walk: grab the root node; check for intersection (no? exit; yes? find the child that intersects the ray and is closest to the ray's origin); loop until I reach a leaf or exit the tree. This always returns a leaf, but in
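
Not the questioner's code, but a minimal sketch of one non-recursive approach: a depth-first walk with an explicit stack that keeps the nearest intersected leaf instead of stopping at the first leaf the drill-down happens to reach. It assumes the octree has been flattened into a StructuredBuffer of nodes with axis-aligned bounds; the node layout and every name are illustrative.

```hlsl
// Hypothetical flattened octree node; children of an internal node are stored
// contiguously starting at firstChild, and firstChild < 0 marks a leaf.
struct OctreeNode
{
    float3 boundsMin;
    float3 boundsMax;
    int    firstChild;
    int    payload;
};

StructuredBuffer<OctreeNode> nodes : register(t0);

// Standard slab test: returns the entry distance, or -1 if the ray misses the box.
float IntersectAABB(float3 ro, float3 invRd, float3 bmin, float3 bmax)
{
    float3 tA = (bmin - ro) * invRd;
    float3 tB = (bmax - ro) * invRd;
    float3 tSmall = min(tA, tB);
    float3 tBig   = max(tA, tB);
    float tNear = max(max(tSmall.x, tSmall.y), tSmall.z);
    float tFar  = min(min(tBig.x, tBig.y), tBig.z);
    return (tFar >= max(tNear, 0.0)) ? max(tNear, 0.0) : -1.0;
}

int FindClosestLeaf(float3 rayOrigin, float3 rayDir, out float bestT)
{
    float3 invRd = 1.0 / rayDir;
    int nodeStack[32];            // sized for a shallow tree; 7 * maxDepth + 1 is a safe bound
    int sp = 0;
    nodeStack[sp++] = 0;          // root node index
    bestT = 1e30;
    int bestLeaf = -1;

    while (sp > 0)
    {
        int idx = nodeStack[--sp];
        OctreeNode n = nodes[idx];
        float t = IntersectAABB(rayOrigin, invRd, n.boundsMin, n.boundsMax);
        if (t < 0.0 || t >= bestT)
            continue;             // missed, or a closer leaf was already found
        if (n.firstChild < 0)
        {
            bestT = t;            // new closest leaf
            bestLeaf = idx;
        }
        else
        {
            for (int c = 0; c < 8; ++c)
                nodeStack[sp++] = n.firstChild + c;
        }
    }
    return bestLeaf;
}
```

Pruning against bestT works because a child's entry distance can never be smaller than its parent's, so subtrees that start behind the current best hit can be skipped safely.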

Normalizing from [0.5 - 1] to [0 - 1]

Submitted by 情到浓时终转凉″ on 2019-12-04 07:32:00
Question: I'm kind of stuck here, I guess it's a bit of a brain teaser. If I have numbers in the range between 0.5 and 1, how can I normalize them to be between 0 and 1? Thanks for any help; maybe I'm just a bit slow since I've been working for the past 24 hours straight O_O Answer 1: Others have provided you the formula, but not the work. Here's how you approach a problem like this. You might find this far more valuable than just knowing the answer. To map [0.5, 1] to [0, 1] we will seek a linear map of the
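
Completing the idea the answer is building toward: seek y = a*x + b with 0.5*a + b = 0 and 1*a + b = 1, which gives a = 2 and b = -1, so y = 2x - 1. A one-line HLSL sketch of the general remap (the function name is illustrative):

```hlsl
// Remaps x from [inMin, inMax] to [0, 1]; for [0.5, 1] this reduces to 2*x - 1.
float Normalize01(float x, float inMin, float inMax)
{
    return (x - inMin) / (inMax - inMin);
}
```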

HLSL branch avoidance

Submitted by 为君一笑 on 2019-12-04 06:16:47
I have a shader where I want to move half of the vertices in the vertex shader. I'm trying to decide the best way to do this from a performance standpoint, because we're dealing with well over 100,000 verts, so speed is critical. I've looked at 3 different methods (pseudo-code, but enough to give you the idea; the <complex formula> I can't give out, but I can say that it involves a sin() function, a function call (it just returns a number, but it's still a function call), and a bunch of basic floating-point arithmetic): if (y < 0.5) { x += <complex formula>; } This has
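
For comparison, a hedged sketch of the branching version next to a branchless one built with step(); ComplexFormula is an illustrative stand-in for the question's <complex formula>, not the real one:

```hlsl
// Illustrative stand-in for the question's <complex formula>.
float ComplexFormula(float y)
{
    return sin(y * 6.2831853) * 0.25;
}

// Branching version: the compiler may flatten this into conditional moves anyway.
float OffsetBranch(float x, float y)
{
    if (y < 0.5)
        x += ComplexFormula(y);
    return x;
}

// Branchless version: always evaluate the formula, then mask its contribution.
// step(0.5, y) is 1 when y >= 0.5 and 0 otherwise, so the mask is 1 only when y < 0.5.
float OffsetBranchless(float x, float y)
{
    float mask = 1.0 - step(0.5, y);
    return x + mask * ComplexFormula(y);
}
```

Since every vertex pays for the formula in the branchless form, whether it wins depends on how coherent the branch is across the 100,000+ verts; profiling both variants is the only reliable answer.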

Pack four bytes in a float

Submitted by 拈花ヽ惹草 on 2019-12-04 02:51:23
I'm writing a shader (HLSL), and I need to pack a color value into the R32 format. I've found various pieces of code for packing a float into the R8G8B8A8 format, but none of them seem to work in reverse. I'm targeting SM3.0, so (AFAIK) bit operations are not an option. To sum it up, I need to be able to do this: float4 color = ...; // Where color ranges from 0 -> 1 float packedValue = pack(color); Anyone know how to do this? UPDATE I've made some headway... perhaps this will help to clarify the question. My temporary solution is as such: const int PRECISION = 64; float4 unpack(float value)
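
A hedged sketch along the same lines as the PRECISION = 64 idea. A 32-bit float only carries 24 mantissa bits, so four full 8-bit channels cannot survive losslessly; this quantizes each channel to 6 bits (64 levels) and stacks them in base 64, using only arithmetic that SM3.0 supports (the function names are illustrative):

```hlsl
// 4 channels x 6 bits = 24 bits, which still fits exactly in a float's mantissa.
static const float PRECISION = 64.0;

float Pack(float4 color)
{
    // Quantize each channel to an integer in [0, 63] and stack them in base 64.
    float4 q = floor(saturate(color) * (PRECISION - 1.0) + 0.5);
    return q.x
         + q.y * PRECISION
         + q.z * PRECISION * PRECISION
         + q.w * PRECISION * PRECISION * PRECISION;
}

float4 Unpack(float packedValue)
{
    float4 q;
    q.x = fmod(packedValue, PRECISION);  packedValue = floor(packedValue / PRECISION);
    q.y = fmod(packedValue, PRECISION);  packedValue = floor(packedValue / PRECISION);
    q.z = fmod(packedValue, PRECISION);  packedValue = floor(packedValue / PRECISION);
    q.w = packedValue;
    return q / (PRECISION - 1.0);
}
```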

How can I feed compute shader results into vertex shader w/o using a vertex buffer?

Submitted by 半腔热情 on 2019-12-03 17:14:21
Before I go into details I want to outline the problem: I use RWStructuredBuffers to store the output of my compute shaders (CS). Since vertex and pixel shaders can't read from RWStructuredBuffers, I map a StructuredBuffer onto the same slots (u0/t0) and (u4/t4): cbuffer cbWorld : register (b1) { float4x4 worldViewProj; int dummy; } struct VS_IN { float4 pos : POSITION; float4 col : COLOR; }; struct PS_IN { float4 pos : SV_POSITION; float4 col : COLOR; }; RWStructuredBuffer<float4> colorOutputTable : register (u0); // 2D color data StructuredBuffer<float4> output2 : register (t0); // same as u0
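
One common pattern for this, sketched under the assumption that the CS wrote one float4 position and one float4 color per vertex: skip the input assembler entirely and index the StructuredBuffers with SV_VertexID (the buffer and register names here are illustrative, not the questioner's exact layout):

```hlsl
// Vertex shader that pulls its data straight from the compute shader's output
// buffers, so no vertex buffer or input layout is bound at all.
cbuffer cbWorld : register(b1)
{
    float4x4 worldViewProj;
    int dummy;
};

StructuredBuffer<float4> csPositions : register(t0);  // SRV view of the CS output
StructuredBuffer<float4> csColors    : register(t1);

struct PS_IN
{
    float4 pos : SV_POSITION;
    float4 col : COLOR;
};

PS_IN VS(uint id : SV_VertexID)
{
    PS_IN o;
    o.pos = mul(csPositions[id], worldViewProj);
    o.col = csColors[id];
    return o;
}
```

On the CPU side this is drawn with a plain Draw(vertexCount, 0); the one thing to watch is that a resource cannot be bound as a UAV (u0) and an SRV (t0) at the same time, so the compute shader's UAV has to be unbound before the SRV is set for the draw call.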

Multiple Render Targets not saving data

Submitted by 此生再无相见时 on 2019-12-03 15:58:08
I'm using SlimDX, targeting DirectX 11 with shader model 4. I have a pixel shader "preProc" which processes my vertices and saves three textures of data: one for per-pixel normals, one for per-pixel position data, and one for color and depth (color takes up rgb and depth takes the alpha channel). I then use these textures in a post-processing shader to implement Screen Space Ambient Occlusion, but it seems none of the data is getting saved by the first shader. Here's my pixel shader: PS_OUT PS( PS_IN input ) { PS_OUT output; output.col = float4(0,0,0,0); output.norm = float4
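
For reference, a minimal sketch of how an MRT pixel-shader output is usually wired up in shader model 4; the PS_IN fields are illustrative, and the part that most often goes wrong is on the CPU side, where all three render-target views must be bound together in one output-merger SetTargets call and must not still be bound as shader resources:

```hlsl
struct PS_IN
{
    float4 pos   : SV_POSITION;
    float3 norm  : NORMAL;
    float3 wpos  : TEXCOORD0;   // world-space position, illustrative
    float4 col   : COLOR;
    float  depth : TEXCOORD1;
};

// Each field maps to one bound render target, in SV_Target slot order.
struct PS_OUT
{
    float4 col  : SV_Target0;   // color in rgb, depth in a
    float4 norm : SV_Target1;   // per-pixel normal
    float4 pos  : SV_Target2;   // per-pixel position
};

PS_OUT PS(PS_IN input)
{
    PS_OUT output;
    output.col  = float4(input.col.rgb, input.depth);
    output.norm = float4(normalize(input.norm), 1.0);
    output.pos  = float4(input.wpos, 1.0);
    return output;
}
```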

HLSL mul() variables clarification

Submitted by 不打扰是莪最后的温柔 on 2019-12-03 15:01:12
The parameters for HLSL's mul(x, y), documented here, say that if x is a vector, it is treated as a row vector, and if y is a vector, it is treated as a column vector. Does it then follow that: a. if x is a vector, y is treated as a row-major matrix, and if y is a vector, x is treated as a column-major matrix; b. since ID3DXBaseEffect::SetMatrix() passes in a row-major matrix, I'd use the matrix passed into the shader in the following order: e.g. Output.mPosition = mul( Input.mPosition, SetMatrix()value ); ? I'm just starting out with shaders and currently relearning my matrix math. It
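
A small sketch of what mul()'s rule actually implies, independent of how the matrix was uploaded; worldViewProj is an illustrative name, and whether it needs a transpose depends on the row/column-major storage convention used on the CPU side, which is a separate question from mul()'s row-vector/column-vector rule:

```hlsl
float4x4 worldViewProj;

// mul(v, M): v is treated as a 1x4 row vector, so this computes v * M.
float4 TransformRowVector(float4 pos)
{
    return mul(pos, worldViewProj);
}

// mul(M, v): v is treated as a 4x1 column vector, so this computes M * v.
// It matches the row-vector form only when the matrix is transposed.
float4 TransformColumnVector(float4 pos)
{
    return mul(transpose(worldViewProj), pos);
}
```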

Matrix multiplication - view/projection, world/projection, etc

Submitted by 陌路散爱 on 2019-12-03 04:47:12
In HLSL there's a lot of matrix multiplication, and while I understand how and where to use them, I'm not sure how they are derived or what their actual goals are. So I was wondering if there's a resource online that explains this; I'm particularly curious about the purpose of multiplying a world matrix by a view matrix and a combined world-view matrix by a projection matrix. You can get some info, from a mathematical viewpoint, in this wikipedia article or on msdn. Essentially, when you render a 3d model to the screen, you start with a simple collection of vertices scattered in 3d
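
A minimal vertex-shader sketch of where each matrix is applied, assuming the row-vector mul(vector, matrix) convention; the cbuffer layout and names are illustrative:

```hlsl
// world: model space -> world space   (place the object in the scene)
// view:  world space -> view space    (express everything relative to the camera)
// proj:  view space  -> clip space    (apply the perspective projection)
cbuffer Transforms : register(b0)
{
    float4x4 world;
    float4x4 view;
    float4x4 proj;
};

float4 VS(float4 modelPos : POSITION) : SV_POSITION
{
    float4 worldPos = mul(modelPos, world);
    float4 viewPos  = mul(worldPos, view);
    float4 clipPos  = mul(viewPos,  proj);
    // Equivalent to mul(modelPos, mul(mul(world, view), proj)), which is why
    // the three are often pre-multiplied into one worldViewProj matrix on the CPU.
    return clipPos;
}
```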