pixel-shader

Handling alpha channel in WPF pixel shader effect

Submitted by 一曲冷凌霜 on 2020-12-29 05:53:27
Question: Is there something unusual about how the alpha component is handled in a pixel shader? I have a WPF application for which my artist gives me grayscale images to use as backgrounds, and the application colorizes those images according to its current state. So I wrote a pixel shader (using the WPF Pixel Shader Effects Library infrastructure) to use as an effect on an Image element. The shader takes a color as a parameter, which it converts to HSL so it can manipulate brightness. Then, for…
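A frequent culprit in this scenario is WPF's use of premultiplied alpha: the sampler hands the shader colors already multiplied by alpha, so per-channel color math (such as the HSL conversion above) goes wrong wherever alpha is less than 1. A minimal sketch of the un-premultiply and re-premultiply steps, written in plain Python for clarity (the helper names are mine, not from the library):

```python
def unpremultiply(r, g, b, a):
    """Divide alpha back out before doing per-channel color math.
    WPF effect samplers deliver premultiplied colors."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)  # fully transparent: nothing to recover
    return (r / a, g / a, b / a, a)

def premultiply(r, g, b, a):
    """Re-multiply by alpha before writing the shader output."""
    return (r * a, g * a, b * a, a)
```

The same two lines translate directly to HLSL (`color.rgb /= color.a;` on input, `color.rgb *= color.a;` on output).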

Calculating world space coordinates in the pixel shader

Submitted by 孤人 on 2020-03-21 19:25:22
Question: I have a pixel shader and I want to calculate the position of each pixel in terms of my world-space coordinates. How would I do this? What would I need? I have a ps_input structure which has a float4 position : SV_POSITION. I'm assuming this is important, but the values stored inside seem kind of funny, and I can't figure out what they relate to. For instance, if a pixel is 2D, how come it has a w component, or a z component for that matter? I'm using DirectX, and the pixel shader…
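For context on entries like this: by the time it reaches the pixel shader, SV_POSITION holds viewport coordinates (x and y in pixels, z the depth-buffer value, w the clip-space w), which is why even a "2D" pixel carries z and w. One common route back to world space is to rebuild normalized device coordinates and multiply by the inverse view-projection matrix. A plain-Python sketch of that math (the row-major matrix layout and D3D conventions here are my assumptions, not taken from the question):

```python
def ndc_from_pixel(px, py, depth, width, height):
    # SV_POSITION.xy are viewport pixel coordinates; map the pixel
    # center back into normalized device coordinates.
    x = 2.0 * (px + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (py + 0.5) / height  # D3D pixel y grows downward
    return (x, y, depth)

def world_from_ndc(ndc, inv_view_proj):
    # Undo the projection: multiply by the inverse view-projection
    # matrix (row-major 4x4), then divide by the resulting w.
    x, y, z = ndc
    v = (x, y, z, 1.0)
    out = [sum(inv_view_proj[r][c] * v[c] for c in range(4))
           for r in range(4)]
    return (out[0] / out[3], out[1] / out[3], out[2] / out[3])
```

In an actual shader it is usually cheaper to skip this reconstruction entirely: pass the world position from the vertex shader through an interpolated TEXCOORD instead.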

Color conversion from DXGI_FORMAT_B8G8R8A8_UNORM to NV12 in GPU using DirectX11 pixel shaders

Submitted by 陌路散爱 on 2020-01-29 04:51:06
Question: I'm working on code to capture the desktop using Desktop Duplication and encode it to H.264 using the Intel hardware MFT. The encoder only accepts the NV12 format as input. I have a DXGI_FORMAT_B8G8R8A8_UNORM to NV12 converter (https://github.com/NVIDIA/video-sdk-samples/blob/master/nvEncDXGIOutputDuplicationSample/Preproc.cpp) that works fine and is based on the DirectX VideoProcessor. The problem is that the VideoProcessor on certain Intel graphics hardware supports conversions only from DXGI…
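Whatever path the conversion takes, the per-pixel core is an RGB-to-YUV transform; NV12 then stores a full-resolution Y plane followed by an interleaved half-resolution UV plane, which is why shader-based converters typically render the two planes in separate passes. The color math alone, sketched in Python (these are the standard limited-range BT.601 coefficients for 8-bit inputs, a common but not the only possible choice of matrix):

```python
def rgb_to_nv12_pixel(r, g, b):
    """Limited-range BT.601 RGB -> YUV, the math a conversion pixel
    shader applies per input pixel (inputs and outputs are 0..255)."""
    y = 0.257 * r + 0.504 * g + 0.098 * b + 16.0
    u = -0.148 * r - 0.291 * g + 0.439 * b + 128.0
    v = 0.439 * r - 0.368 * g - 0.071 * b + 128.0
    return y, u, v
```

For the UV pass, each output sample covers a 2x2 block of source pixels, so the shader averages (or point-samples) four RGB values before applying the U/V rows of the matrix.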

Using floor() function in GLSL when sampling a texture leaves glitch

Submitted by 删除回忆录丶 on 2020-01-04 04:04:07
Question: Here's a Shadertoy example of the issue I'm seeing: https://www.shadertoy.com/view/4dVGzW I'm sampling a texture at floor-ed texture coordinates: #define GRID_SIZE 20.0 void mainImage( out vec4 fragColor, in vec2 fragCoord ) { vec2 uv = fragCoord.xy / iResolution.xy; // Sample texture at integral positions vec2 texUV = floor(uv * GRID_SIZE) / GRID_SIZE; fragColor = texture2D(iChannel0, texUV).rgba; } The bug I'm seeing is that 1-2 pixel lines are sometimes drawn between…
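A common explanation for this class of glitch is that floor(uv * N) / N lands the sample exactly on a cell boundary, where texture filtering (and the derivative-based mip selection, which sees a discontinuity at the jump) can pull in the neighboring cell. Snapping to the cell center avoids both problems. The adjusted coordinate math in Python (the + 0.5 offset is the fix; it is not part of the original shader):

```python
from math import floor

GRID_SIZE = 20.0

def cell_uv(u):
    # Original: floor(u * N) / N samples exactly on the cell edge.
    # Adding 0.5 before dividing samples the center of the cell,
    # away from any filtering seam.
    return (floor(u * GRID_SIZE) + 0.5) / GRID_SIZE
```

In the GLSL above this is the one-line change `vec2 texUV = (floor(uv * GRID_SIZE) + 0.5) / GRID_SIZE;`.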

WebGL/GLSL - How does a ShaderToy work?

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-29 10:09:11
Question: I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL, and GLSL in particular. From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc.). Once that's done, the user then has to provide at least one vertex shader program and one fragment shader program before an OpenGL program will compile. However, when I look at the…
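The short answer to questions like this: ShaderToy supplies the geometry itself (a single full-screen quad drawn with a trivial vertex shader), and the code you type is only the fragment stage, invoked once per pixel with built-in uniforms such as iResolution. A pure-Python stand-in for that control flow (obviously not the real GPU pipeline, just an illustration of who calls what):

```python
def render(width, height, main_image):
    """Sketch of ShaderToy's job: after the hidden full-screen quad is
    rasterized, the user's mainImage runs once per covered pixel."""
    resolution = (float(width), float(height))
    return [[main_image((x + 0.5, y + 0.5), resolution)  # pixel centers
             for x in range(width)]
            for y in range(height)]

# Usage: a "shader" that outputs the normalized x coordinate.
gradient = render(2, 1, lambda frag, res: frag[0] / res[0])
```

Everything else the OpenGL/WebGL setup normally requires (buffers, attributes, compiling and linking the program) is done for you behind the page.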

Support more than one color input in my Pixel Shader (UWP, Win2D)

Submitted by 北慕城南 on 2019-12-24 05:33:07
Question: I've been working on an app that provides color replacement, and I've had a lot of help from @Jet Chopper on a solution. He provided me with the following code, which essentially uses a ControlSpectrum control for the source and target colors. The idea is that you specify a source color, which then gets replaced by a target color. Here's the current working code; my original post contains the original solution with a GIF. XAML: <Grid> <xaml:CanvasAnimatedControl x:Name=…
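One straightforward way to generalize a single source/target pair to several is a small array of pairs tested in order, with the first within-tolerance match winning. A hypothetical sketch of that rule in Python (the function name, the per-channel tolerance test, and the first-match policy are all my assumptions, not taken from the Win2D effect in the post):

```python
def replace_colors(pixel, pairs, tolerance):
    """Replace `pixel` with a target color if it is within `tolerance`
    of any source color; pairs is a list of (source, target) tuples."""
    for source, target in pairs:
        if all(abs(pc - sc) <= tolerance for pc, sc in zip(pixel, source)):
            return target
    return pixel  # no source matched: leave the pixel unchanged
```

In a Win2D pixel shader the same idea would be expressed as a fixed-size array of source/target constants looped over per pixel, since HLSL effect constants cannot be dynamically sized.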

HLSL Shader to Subtract Background Image

Submitted by ⅰ亾dé卋堺 on 2019-12-24 03:58:08
Question: I am trying to get an HLSL pixel shader for Silverlight to subtract a background image from a video image. Can anyone suggest a more sophisticated algorithm than the one I am using, because my algorithm isn't doing it correctly? float Tolerance : register(C1); SamplerState ImageSampler : register(S0); SamplerState BackgroundSampler : register(S1); struct VS_INPUT { float4 Position : POSITION; float4 Diffuse : COLOR0; float2 UV0 : TEXCOORD0; float2 UV1 : TEXCOORD1; }; struct VS_OUTPUT {…
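The usual per-pixel rule for this kind of effect is: compare the video pixel against the background pixel at the same coordinate, and output transparency when they are within the tolerance. One common variant sketched in Python (Euclidean RGB distance is my assumption here; the shader in the question may compare channels differently):

```python
def subtract_background(frame_px, bg_px, tolerance):
    """If the video pixel is close enough to the background pixel,
    output fully transparent; otherwise keep the video pixel opaque.
    frame_px and bg_px are (r, g, b) tuples in 0..1."""
    dist = sum((f - b) ** 2 for f, b in zip(frame_px, bg_px)) ** 0.5
    if dist <= tolerance:
        return (0.0, 0.0, 0.0, 0.0)  # premultiplied transparent black
    return (*frame_px, 1.0)
```

A hard threshold like this tends to produce noisy edges on real video; smoothing the alpha over a tolerance band (smoothstep in HLSL) and blurring or temporally averaging the background frame usually help more than a cleverer distance metric.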

Floyd–Steinberg dithering alternatives for pixel shader

Submitted by 左心房为你撑大大i on 2019-12-22 03:56:10
Question: I know that the Floyd–Steinberg dithering algorithm can't be implemented in a pixel shader, because that algorithm is strictly sequential. But maybe there exists some highly parallel dithering algorithm whose visual output is similar to Floyd–Steinberg? So the question is: which dithering algorithms are suitable to implement in a pixel shader (preferably GLSL) with output quality (very) similar to Floyd–Steinberg dithering? BTW, multi-pass algorithms are allowed…
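The standard parallel answer is ordered (Bayer) dithering: each pixel is thresholded against a fixed matrix indexed by its own coordinates, so no neighbor communication is needed, which makes it a natural fit for a fragment shader. A Python sketch of the 4x4 Bayer version (the threshold matrix is the standard one; the function name is mine):

```python
# Standard 4x4 Bayer matrix; entries 0..15 become thresholds in 0..1.
BAYER_4 = [[ 0,  8,  2, 10],
           [12,  4, 14,  6],
           [ 3, 11,  1,  9],
           [15,  7, 13,  5]]

def ordered_dither(value, x, y):
    """Binarize a 0..1 gray level using a per-pixel threshold taken
    from the Bayer matrix; every pixel is independent of its neighbors."""
    threshold = (BAYER_4[y % 4][x % 4] + 0.5) / 16.0
    return 1 if value > threshold else 0
```

In GLSL the matrix typically lives in a constant array or a tiny tiled texture indexed with `ivec2(gl_FragCoord.xy) % 4`. For output closer to Floyd–Steinberg's character, a blue-noise threshold texture is a drop-in upgrade over the Bayer matrix: same one-tap shader, much less visible pattern.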