hlsl

NSight Graphics Debugging cannot start

自闭症网瘾萝莉.ら submitted on 2020-04-30 07:43:07
Question: I am trying to debug an HLSL shader in VS2012 using NSight, but it can't start. When I click on "Start Graphics Debugging", it seems to start the app for a moment and then close it (the output window from NSight shows several "shader loaded"/"shader unloaded" lines). The Windows event log doesn't show anything (except "NVIDIA Network Service" failing to start, but if I understood correctly, this is related to updates). On the other hand, if I start GPU Performance analysis, then it runs

Calculating world space coordinates in the pixel shader

孤人 submitted on 2020-03-21 19:25:22
Question: I have a pixel shader and I want to calculate the position of each pixel in terms of my world space coordinates. How would I do this? What would I need? I have a ps_input structure which has a float4 position : SV_POSITION . I'm assuming this is important, but the values stored inside seem kind of funny. I can't figure out what they relate to. For instance, if a pixel is 2D, how come it has a w component, or a z component for that matter? I'm using DirectX and the pixel shader
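A common answer to this kind of question is to compute the world position in the vertex shader and pass it down as an extra interpolant, rather than trying to recover it from SV_POSITION (which in the pixel shader holds viewport pixel coordinates, not world space). A minimal sketch; the cbuffer and semantic names are illustrative, not from the question's code:

```hlsl
cbuffer PerObject
{
    matrix World;          // object space -> world space
    matrix ViewProjection; // world space  -> clip space
};

struct VS_INPUT { float4 pos : POSITION; };

struct PS_INPUT
{
    float4 pos      : SV_POSITION; // clip space, consumed by the rasterizer
    float3 worldPos : TEXCOORD0;   // world space, interpolated per pixel
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output;
    float4 world    = mul(input.pos, World);
    output.worldPos = world.xyz;
    output.pos      = mul(world, ViewProjection);
    return output;
}

float4 PS(PS_INPUT input) : SV_TARGET
{
    // input.worldPos now holds the world-space position of this pixel.
    return float4(input.worldPos, 1.0f);
}
```

The w component the asker is seeing exists because positions travel through the pipeline in homogeneous clip space; the rasterizer divides by w before the pixel shader runs.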

(How) can a shader view the current render-buffer?

拈花ヽ惹草 submitted on 2020-01-30 09:12:06
Question: Is it possible for a pixel shader to see the current state of the depth/color/stencil buffer? Answer 1: A fragment shader is not given the current buffer values for the fragment it is working on. Attempts to read these values, by using those buffers as textures, will not in the general case produce reasonable results. It's "undefined behavior." There are certain specific cases where it can work. First, you can use texture barriers. That is technically an NVIDIA extension, but ATI supports it
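On the Direct3D 11 side the portable workaround is to copy the render target into a second texture (CopyResource on the CPU side) and sample the copy, since a resource cannot be bound as both render target and shader input at once. A sketch, with illustrative register assignments:

```hlsl
// A copy of the color buffer made before this pass; binding the live render
// target itself as an SRV is rejected by the runtime.
Texture2D<float4> PreviousColor : register(t0);

float4 PS(float4 pos : SV_POSITION) : SV_TARGET
{
    // SV_POSITION.xy is already in pixels, so Load by integer coordinate.
    float4 prev = PreviousColor.Load(int3(pos.xy, 0));
    return prev * 0.5f; // e.g. blend against the copied contents
}
```

This costs a copy per pass, which is exactly the overhead the texture-barrier extensions mentioned in the answer are designed to avoid.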

HLSL Integer Texture Coordinates

北城以北 submitted on 2020-01-16 00:44:46
Question: I'm trying to interpolate between integer pixel coordinates instead of between 0 and 1, because I'm using point sampling, so I'm not interested in fractions of pixels, but the texture coordinates still come into the pixel shader as float2 even though the data type is int2. pixelSize is 1 divided by the texture size: matrix WorldViewProjection; float2 pixelSize; Texture2D SpriteTexture; sampler2D SpriteTextureSampler = sampler_state { Texture = <SpriteTexture>; AddressU = clamp; AddressV = clamp;
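If the target supports Shader Model 4+, one way to sidestep the sampler entirely is to recover integer texel coordinates from the interpolated UVs and fetch with Texture2D.Load, which takes integer coordinates and does no filtering. A sketch built on the question's own pixelSize convention:

```hlsl
Texture2D SpriteTexture;
float2 pixelSize; // 1.0 / texture dimensions, as in the question

float4 PS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    // Recover integer pixel coordinates from the interpolated 0..1 UVs.
    int2 texel = int2(uv / pixelSize);
    // Load takes integer coordinates directly; no sampler state,
    // no filtering, no fractional pixels.
    return SpriteTexture.Load(int3(texel, 0));
}
```

The question's sampler2D/sampler_state syntax suggests the older effects framework; under SM3 the equivalent is tex2D with UVs snapped to texel centers, e.g. (floor(uv / pixelSize) + 0.5) * pixelSize.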

DirectX 11 - Compute shader: Writing to an output resource

十年热恋 submitted on 2020-01-12 02:21:13
Question: I've just started using the compute shader stage in DirectX 11 and encountered some unwanted behaviour when writing to an output resource in the compute shader. I seem to get only zeroes as output which, to my understanding, means that out-of-bounds reads have been performed in the compute shader. (Out-of-bounds writes result in no-ops.) Creating the compute shader components Input resources First I create an ID3D11Buffer* for input data. This is passed as a resource when creating the SRV used
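For reference, the shader side of this setup is small; the symptoms described (all zeroes) usually come from the CPU side, since out-of-bounds reads on an SRV return zero. A sketch of the pattern, with illustrative names; the CPU side must create the output buffer with D3D11_BIND_UNORDERED_ACCESS and bind it via CSSetUnorderedAccessViews:

```hlsl
// One value per thread: read from the SRV, write to the UAV.
StructuredBuffer<float>   Input  : register(t0);
RWStructuredBuffer<float> Output : register(u0);

[numthreads(64, 1, 1)]
void CS(uint3 id : SV_DispatchThreadID)
{
    // Out-of-bounds reads return zero and out-of-bounds writes are dropped,
    // so Dispatch(ceil(count / 64), 1, 1) must cover the element count and
    // the buffer's StructureByteStride must match the element type.
    Output[id.x] = Input[id.x] * 2.0f;
}
```

If this produces zeroes, the first things to check are the SRV/UAV element counts and that the input buffer was actually initialized with data (D3D11_SUBRESOURCE_DATA or UpdateSubresource).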

DirectX11 Shader Compilation Issue

孤人 submitted on 2020-01-07 09:21:13
Question: I'm working on a simple DirectX application to display a couple of triangles together as a tetrahedron, which keeps crashing at start. I checked with the VS2012 debugger; the error occurs at the stage where the shader is supposed to be compiled from a .fx file, so I assume it has something to do with the shader. I have no idea what I did wrong. Below is the code of the shader I'm using. Assistance required. struct Light { float3 pos; float4 ambient; float4 diffuse; }; cbuffer cbPerFrame { Light light; };
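Beyond checking the error blob returned by the compile call, one thing worth auditing in a cbuffer like this is packing: HLSL aligns constant-buffer members to 16-byte registers, so a float3 followed by a float4 is silently padded, and the CPU-side struct must match byte for byte or the data (not the compile) goes wrong. A padded version of the question's layout, as a sketch:

```hlsl
struct Light
{
    float3 pos;     // occupies .xyz of one 16-byte register
    float  pad0;    // explicit padding so the C++ struct can mirror it 1:1
    float4 ambient; // starts cleanly on the next register
    float4 diffuse;
};

cbuffer cbPerFrame : register(b0)
{
    Light light;
};
```

For the crash itself, the usual cause is using the HRESULT/blob from D3DX11CompileFromFile without checking it and then dereferencing a null shader blob; the compiler's error text in the error blob will name the actual line.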

Declaring 2d array in XNA

家住魔仙堡 submitted on 2020-01-06 14:00:11
Question: Six 800x640 images need to be rendered from a point of view, and based on these images one new 800x640 image, a sort of fish-eye view, should be created. At the moment the application draws each image and reads the colors by calling the RenderTarget2D.GetData method, and based on each pixel position it calculates a new position for that final image. It works fine, but since calling RenderTarget2D.GetData reduces performance, I tried to declare an 800x640 array in the shader and then render 6 images
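An 800x640 array will not fit in shader constants, but the readback can be avoided another way: keep the six views as render targets, bind them as textures to one final full-screen pass, and do the fish-eye remap on the GPU. A sketch in the effects-style HLSL that XNA uses; the remap itself is a placeholder for the question's CPU-side mapping:

```hlsl
// The six rendered views, bound as textures for the final pass so that no
// RenderTarget2D.GetData readback is needed. Only one sampler is shown;
// View1..View5 would be declared the same way.
texture ViewTexture0;
sampler View0 = sampler_state
{
    Texture   = <ViewTexture0>;
    AddressU  = Clamp;
    AddressV  = Clamp;
};

float4 PS(float2 uv : TEXCOORD0) : COLOR0
{
    // Hypothetical fish-eye remap: the real mapping from pixel position to
    // (view index, coordinate) would reproduce the question's CPU math here.
    float2 remapped = uv; // placeholder for the warp
    return tex2D(View0, remapped);
}
```

The key point is the direction of data flow: instead of pulling pixels back to the CPU, the final image is composed in a single draw over a full-screen quad.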

Shader for counting number of pixels

谁说我不能喝 submitted on 2020-01-05 02:34:12
Question: I'm looking for a shader, CG or HLSL, that can count the number of red pixels, or any other color that I want. Answer 1: You could do this with atomic counters in a fragment shader. Just test the output color to see if it's within a certain tolerance of red, and if so, increment the counter. After the draw call you should be able to read the counter's value on the CPU and do whatever you like with it. edit: added a very simple example fragment shader: // Atomic counters require 4.2 or higher according
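The answer's example is GLSL; the Direct3D 11 equivalent of an atomic counter is a one-element UAV incremented with InterlockedAdd from the pixel shader. A sketch, with an illustrative tolerance for "red":

```hlsl
// UAV bound alongside the render target (render target occupies slot 0,
// so the counter goes in u1). The CPU zeroes it before the draw and reads
// it back afterwards via a staging buffer.
RWBuffer<uint> RedCount : register(u1);
Texture2D Scene : register(t0);

float4 PS(float4 pos : SV_POSITION) : SV_TARGET
{
    float4 color = Scene.Load(int3(pos.xy, 0));
    // Tolerance test for "red": strong red channel, weak green and blue.
    if (color.r > 0.7f && color.g < 0.3f && color.b < 0.3f)
        InterlockedAdd(RedCount[0], 1);
    return color;
}
```

The thresholds are arbitrary; any color test can be substituted, and the readback is one uint, so the GetData-style cost is negligible.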

Shader - Simple SSS lighting issue

夙愿已清 submitted on 2020-01-02 09:28:07
Question: I am trying to create a simple subsurface scattering effect using a shader, but I am facing a small issue. Look at those screenshots. The three images represent three lighting states (above surface, really close to surface, subsurface) with various lighting colors (red and blue) and always the same subsurface color (red). As you might notice, when the light is above the surface and really close to it, its influence appears to diminish, which is the expected behavior. But the problem is
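For context, the kind of "simple SSS" the question describes is usually the fast translucency approximation: distort the light vector by the surface normal to mimic refraction, then measure how directly the viewer looks "through" the surface toward the light. A sketch; the function, parameter names, and constants are illustrative, not the asker's code:

```hlsl
// Fake subsurface term: N = surface normal, L = direction to light,
// V = direction to viewer, all normalized.
float3 SimpleSSS(float3 N, float3 L, float3 V,
                 float3 subsurfaceColor, float distortion, float power)
{
    // Shift the light vector along the normal to approximate refraction
    // through the surface.
    float3 H = normalize(L + N * distortion);
    // Strongest when the viewer looks toward the light through the surface;
    // 'power' tightens the falloff.
    float transmittance = pow(saturate(dot(V, -H)), power);
    return subsurfaceColor * transmittance;
}
```

This term is simply added to the regular diffuse/specular result, which is why getting the front-lighting and back-lighting cases to cross over smoothly, as the screenshots show, takes some tuning of distortion and power.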

How can I use VS2012's automatic HLSL compiling in a C# project?

前提是你 submitted on 2020-01-02 05:06:11
Question: http://blogs.msdn.com/b/chuckw/archive/2012/05/07/hlsl-fxc-and-d3dcompile.aspx The above link states that "Note: This automatic integration only works for C++ projects, not C# projects." I'm using SlimDX, and surely there's a way to make Visual Studio compile HLSL shaders at build time in C# projects? Answer 1: I think you will find this might do it for you. You must have the fxc.exe DirectX compiler in the bin folder; it can be found in the DirectX SDK. This class provides an