directx-11

DirectX Image texture quad displays underlying controls color where it is transparent

柔情痞子 submitted on 2019-12-22 01:08:17
Question: I'm trying to draw a texture on a texture as shown in the image below. Yellow circle image: Green circle image: As shown in the above image of penguins, I'm trying to render another image as a texture, indicated by the green and yellow circles. The image is transparent where purple is shown; purple is the color of the underlying control on which the texture is drawn. The order of rendering is: 1. render penguins 2. render green circle 3. render yellow circle 4. render green circle. Now I'm not sure as
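A likely cause of the symptom described (hedged, since the question is truncated) is drawing the overlay quads with alpha blending disabled, so transparent texels overwrite the pixels behind them with whatever color the texture holds. In D3D11 the fix is a blend state using `D3D11_BLEND_SRC_ALPHA` / `D3D11_BLEND_INV_SRC_ALPHA`. A minimal CPU sketch of the "over" operation that blend state performs per pixel (the struct and function names are illustrative, not D3D API):

```cpp
#include <cassert>

// Standard "over" compositing: the per-pixel operation D3D11 performs when the
// blend state uses D3D11_BLEND_SRC_ALPHA (source factor) and
// D3D11_BLEND_INV_SRC_ALPHA (destination factor).
struct Color { float r, g, b, a; };

Color blendOver(const Color& src, const Color& dst) {
    float ia = 1.0f - src.a;   // weight of the already-drawn pixel
    return { src.r * src.a + dst.r * ia,
             src.g * src.a + dst.g * ia,
             src.b * src.a + dst.b * ia,
             src.a + dst.a * ia };
}
```

With `src.a == 0` the result is exactly the destination color, so the penguins, rather than the control's purple background, stay visible under the transparent parts of the circle textures.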

DirectX 11 compute shader for ray/mesh intersect

被刻印的时光 ゝ submitted on 2019-12-21 21:34:15
Question: I recently converted a DirectX 9 application that used D3DXIntersect to find ray/mesh intersections to DirectX 11. Since D3DXIntersect is not available in DX11, I wrote my own code to find the intersection: it simply loops over all the triangles in the mesh, tests each one, and keeps track of the closest hit to the origin. This is done on the CPU side and works fine for picking via the GUI, but I have another part of the application that creates a new mesh from an existing one based on
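The CPU loop described above maps naturally to a compute shader that runs one thread per triangle and reduces to the smallest hit distance. The per-triangle test is typically Möller–Trumbore; a CPU reference version, useful for validating an HLSL port against known rays (the vector type and helper names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test. On a hit, t receives the distance along
// the ray (hit point = orig + t*dir); the caller keeps the smallest t over
// all triangles to find the closest intersection.
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;      // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                   // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                 // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;                              // ignore hits behind the origin
}
```

On the GPU side the reduction needs care: one common approach is an atomic min over hit distances encoded as ordered unsigned integers, another is a per-group reduction in groupshared memory followed by a small CPU pass over the group results.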

DirectX 11 Front Buffer

时光毁灭记忆、已成空白 submitted on 2019-12-21 20:55:49
Question: I am hoping this is an easy answer to an easy question to which I cannot find an answer. How do I access the front buffer in DirectX 11 / DXGI? I have found that in DirectX 9 you can use GetFrontBufferData(), and in DirectX 11 you can use GetBuffer() to get access to the back buffer, but there are problems with this: the back buffer doesn't have the calculations applied to it that the front buffer does. So I was wondering if there is something I am missing. I could try using GetDisplaySurfaceData and unless I

DirectX: World to view matrix - where is my misconception?

拜拜、爱过 submitted on 2019-12-21 20:32:34
Question: I'm starting with DirectX (and SharpDX, therefore programming only in C#/HLSL) and am trying to build my own camera class. It should be rotatable and allow forward and backward movement as well as "sideways" movement (the classic first-person movement often mapped to A and D, plus up and down in my case). For easier bugfixing, model and world space are the same in my case; perspective projection is not yet implemented, nor is rotating the camera, and my camera is supposed to look in the positive Z
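For reference when debugging a hand-rolled camera: the left-handed look-at view matrix (what SharpDX's `Matrix.LookAtLH` produces, row-vector convention) is just the inverse of the camera's world transform. The rotation block is the transposed camera basis, and the translation row holds the negated projections of the eye position onto that basis. A language-neutral sketch in plain C++ (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return { v.x/l, v.y/l, v.z/l }; }

// Left-handed look-at, row-vector convention (v_view = v_world * m),
// stored row-major as m[row][col].
void lookAtLH(Vec3 eye, Vec3 target, Vec3 up, float m[4][4]) {
    Vec3 z = normalize(sub(target, eye));   // camera forward (+Z in view space)
    Vec3 x = normalize(cross(up, z));       // camera right
    Vec3 y = cross(z, x);                   // camera up (already unit length)
    float rows[4][4] = {
        { x.x, y.x, z.x, 0.0f },            // transposed basis: world -> view rotation
        { x.y, y.y, z.y, 0.0f },
        { x.z, y.z, z.z, 0.0f },
        { -dot(x, eye), -dot(y, eye), -dot(z, eye), 1.0f },
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) m[r][c] = rows[r][c];
}
```

Two cheap sanity checks for a camera class: at the origin looking down +Z with up (0,1,0) the view matrix must be the identity, and a camera at (0,0,-5) must map the world origin to (0,0,5) in view space.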

How to enable Hardware Percentage Closer Filtering?

夙愿已清 submitted on 2019-12-21 20:23:06
Question: I am trying to add PCF filtering to my shadow maps, so I modified the GPU Gems article ( http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html ) so that it can run on current shaders. The modification includes replacing the tex2Dproj function with Texture2D.Sample(), which accepts sampler states created in DirectX 11. Then I compared the offset values with a normal shadow-map comparison: float2 ShTex; ShTex.x = PSIn.ShadowMapSamplingPos.x/PSIn
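One detail worth flagging: plain `Texture2D.Sample()` cannot trigger hardware PCF. D3D11 exposes it through `SampleCmp`/`SampleCmpLevelZero` with a comparison sampler (a `D3D11_FILTER_COMPARISON_*` filter plus `D3D11_COMPARISON_LESS_EQUAL`), which performs the depth comparison before filtering. What PCF computes can be sketched on the CPU as a 3x3 box filter over binary depth tests (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

static int clampi(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

// Percentage-closer filtering: compare the receiver's depth against a
// neighbourhood of shadow-map texels and average the binary results, giving
// a soft lit factor in [0, 1] instead of a hard 0/1 shadow edge.
float pcf3x3(const float* shadowMap, int w, int h, int x, int y, float receiverDepth) {
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = clampi(x + dx, 0, w - 1);   // clamp-to-edge addressing
            int sy = clampi(y + dy, 0, h - 1);
            if (receiverDepth <= shadowMap[sy * w + sx])
                lit += 1.0f;                     // this tap is not occluded
        }
    }
    return lit / 9.0f;
}
```

Hardware PCF effectively does the comparison per tap inside one bilinear fetch, so a single `SampleCmp` already returns a filtered value; manual multi-tap loops like the one above are only needed for wider kernels.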

Are there DirectX guidelines for binding and unbinding resources between draw calls?

我怕爱的太早我们不能终老 submitted on 2019-12-21 05:21:35
Question: All DirectX books and tutorials strongly recommend reducing resource bindings between draw calls to a minimum, yet I can't find any guidelines that go into more detail. Reviewing a lot of sample code found on the web, I have concluded that programmers have completely different coding principles regarding this subject. Some even set and unset VS/PS, VS/PS ResourceViews, RasterizerStage, DepthStencilState, PrimitiveTopology, ... before and after every draw call (although the setup remains
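A common middle ground between the two extremes is to never unbind anything and instead filter out redundant binds: remember what is currently set per slot and skip the API call when the same object is requested again. A sketch with a stand-in device type (`FakeDevice` and the method names are illustrative counters, not the D3D API):

```cpp
#include <cassert>

// Stand-in for ID3D11DeviceContext that just counts real API calls.
struct FakeDevice {
    int apiCalls = 0;
    void PSSetShader(const void* shader) { (void)shader; ++apiCalls; }
};

// Thin wrapper: binds only when the value actually changes, and never
// unbinds between draws -- stale bindings are simply overwritten later.
class StateCache {
public:
    explicit StateCache(FakeDevice& dev) : dev_(dev) {}
    void setPixelShader(const void* ps) {
        if (ps == lastPS_) return;     // redundant bind: skip the driver round trip
        dev_.PSSetShader(ps);
        lastPS_ = ps;
    }
private:
    FakeDevice& dev_;
    const void* lastPS_ = nullptr;
};
```

The same pattern extends to shader resource views, samplers, rasterizer/depth-stencil state and topology. Explicitly unbinding after every draw, as some samples do, mainly pays off when it prevents a read/write hazard, for example a render target that is still bound as a shader resource view.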

Typical rendering strategy for many and varied complex objects in DirectX?

点点圈 submitted on 2019-12-21 04:07:14
Question: I am learning DirectX. It provides a huge amount of freedom in how to do things, but presumably different strategies perform differently, and it offers little guidance as to what well-performing usage patterns might be. When using DirectX, is it typical to have to swap in a bunch of new data multiple times on each render? The most obvious, and probably really inefficient, way to use it would be like this. Strategy 1: on every single render, load everything for model 0 (textures included) and
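The usual pattern is the opposite of Strategy 1: buffers and textures are created once at load time, and per frame you only bind and draw. To keep the number of binds small, engines typically sort draw calls by a state key (shader first, then texture) so identical state runs back to back. A sketch of why the sort helps, counting texture rebinds (types are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Draw { int shaderId; int textureId; };

// Count how many times the bound texture would have to change while
// submitting the draws in the given order.
static int textureSwitches(const std::vector<Draw>& draws) {
    int switches = 0, bound = -1;
    for (const Draw& d : draws) {
        if (d.textureId != bound) { ++switches; bound = d.textureId; }
    }
    return switches;
}

// Sort by (shader, texture) so draws sharing state end up adjacent.
static void sortByState(std::vector<Draw>& draws) {
    std::sort(draws.begin(), draws.end(), [](const Draw& a, const Draw& b) {
        return a.shaderId != b.shaderId ? a.shaderId < b.shaderId
                                        : a.textureId < b.textureId;
    });
}
```

With four draws alternating between two textures, the unsorted order rebinds on every draw (4 switches) while the sorted order needs only one bind per texture (2 switches); the gap grows with scene size.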

SharpDX 2.5 in DirectX11 in WPF

人盡茶涼 submitted on 2019-12-21 03:44:19
Question: I'm trying to implement DirectX 11 using SharpDX 2.5 in WPF. Sadly, http://directx4wpf.codeplex.com/ and http://sharpdxwpf.codeplex.com/ don't work properly with SharpDX 2.5. I was also not able to port the WPFHost DX10 sample to DX11, and the full code package of this example is down: http://www.indiedev.de/wiki/DirectX_in_WPF_integrieren Can someone suggest another way of implementing this? Answer 1: SharpDX supports WPF via SharpDXElement. Take a look in the Samples repository at the Toolkit.sln -

Rendering to a full 3D Render Target in one pass

寵の児 submitted on 2019-12-21 02:26:26
Question: Using DirectX 11, I created a 3D volume texture that can be bound as a render target: D3D11_TEXTURE3D_DESC texDesc3d; // ... texDesc3d.Usage = D3D11_USAGE_DEFAULT; texDesc3d.BindFlags = D3D11_BIND_RENDER_TARGET; // Create volume texture and views m_dxDevice->CreateTexture3D(&texDesc3d, nullptr, &m_tex3d); m_dxDevice->CreateRenderTargetView(m_tex3d, nullptr, &m_tex3dRTView); I would now like to update the whole render target and fill it with procedural data generated in a pixel shader, similar

Using Media Foundation to encode Direct X surfaces

只愿长相守 submitted on 2019-12-20 11:32:12
Question: I'm trying to use the Media Foundation API to encode a video, but I'm having problems pushing the samples to the SinkWriter. I'm getting the frames to encode through the Desktop Duplication API. What I end up with is an ID3D11Texture2D with the desktop image in it. I'm trying to create an IMFVideoSample containing this surface and then push that video sample to a SinkWriter. I've tried going about this in different ways: I called MFCreateVideoSampleFromSurface(texture, &pSample), where texture