direct2d

ID3D11Texture2D to ID2D1Bitmap, is it possible?

我怕爱的太早我们不能终老 submitted on 2020-05-11 02:59:51
Question: I am working on an extension to a game which only opens an HDC for addon developers to draw on. However, I have exhausted the GDI+/Direct2D drawing options that are fast enough for what I want to accomplish: image effects (Additive, Blend, Multiply Blend, etc.). I am well aware that Direct2D offers an effects toolkit; however, that requires the Platform Update (for Windows 7), which is not ideal at all. Hence I am left with only Direct3D. MSDN/Google Search offers lots of ways to do D2D -> D3D,
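
The excerpt cuts off here. For reference, a minimal sketch of the interop direction the title asks about (a D3D11 texture consumed as a D2D bitmap without copying), assuming a Direct2D 1.1 device context is available (which, as noted above, needs the Platform Update on Windows 7); d3dTexture, d2dContext and BitmapFromTexture are placeholder names, not part of the question:

#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <d3d11.h>

// Wraps an ID3D11Texture2D as an ID2D1Bitmap1 by querying its DXGI surface.
// Assumes the D2D device context was created on the same DXGI device as the
// texture, and that the texture format is B8G8R8A8.
HRESULT BitmapFromTexture(ID2D1DeviceContext* d2dContext,
                          ID3D11Texture2D* d3dTexture,
                          ID2D1Bitmap1** outBitmap)
{
    IDXGISurface* dxgiSurface = nullptr;
    HRESULT hr = d3dTexture->QueryInterface(__uuidof(IDXGISurface),
                                            reinterpret_cast<void**>(&dxgiSurface));
    if (FAILED(hr)) return hr;

    D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_NONE,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));

    hr = d2dContext->CreateBitmapFromDxgiSurface(dxgiSurface, &props, outBitmap);
    dxgiSurface->Release();
    return hr;
}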

CLSID_D2D1ChromaKey issues

久未见 submitted on 2020-03-24 14:14:21
Question: I am trying to use the DirectX ChromaKey effect, but my function gets stuck on one of the steps. What I do: (1) create an ID2D1Factory1; (2) create an ID3D11Device and ID3D11DeviceContext; (3) obtain an IDXGIResource from the received texture; (4) obtain a shared handle from the IDXGIResource; (5) open the IDXGIResource as a new ID3D11Texture2D using the ID3D11Device; (6) obtain the D3D11_TEXTURE2D_DESC of the new ID3D11Texture2D; (7) create a new ID3D11Texture2D using the input ID3D11Texture2D and the D3D11_TEXTURE2D_DESC; (8) copy the resource from the obtained ID3D11Texture2D to the created ID3D11Texture2D
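
The excerpt cuts off before the step that actually fails. For context, a minimal sketch of how CLSID_D2D1ChromaKey itself is typically wired up once a bitmap is available on the Direct2D device context (the effect CLSID requires the Windows 10 SDK headers; d2dContext and sourceBitmap are placeholder names):

#include <d2d1_1.h>
#include <d2d1helper.h>
#include <d2d1effects_2.h>   // CLSID_D2D1ChromaKey

// Key out green from sourceBitmap and draw the result.
ID2D1Effect* chromaKey = nullptr;
HRESULT hr = d2dContext->CreateEffect(CLSID_D2D1ChromaKey, &chromaKey);
if (SUCCEEDED(hr))
{
    chromaKey->SetInput(0, sourceBitmap);
    chromaKey->SetValue(D2D1_CHROMAKEY_PROP_COLOR, D2D1::Vector3F(0.0f, 1.0f, 0.0f));
    chromaKey->SetValue(D2D1_CHROMAKEY_PROP_TOLERANCE, 0.3f);

    d2dContext->BeginDraw();
    d2dContext->DrawImage(chromaKey);
    hr = d2dContext->EndDraw();
    chromaKey->Release();
}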

How to save bmp file from ID2D1Bitmap

限于喜欢 submitted on 2020-01-16 11:10:06
Question: I am trying to create a bmp file from live running video using Kinect. I am developing an application which runs the live video and places an image on top of it. The IDE I am using is Visual Studio Professional 2010, and the code is in C++ using Win32. I want to save the video along with the overlaid image. Right now I am using an ID2D1Bitmap to display the bitmap in an overlaid manner, but I have to retrieve the byte* data from the video with the overlaid image. I am
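
The excerpt is cut short. One common way to reach the raw bytes of an ID2D1Bitmap is to copy it into a CPU-readable staging bitmap and map it; a minimal sketch assuming a Direct2D 1.1 device context (d2dContext and sourceBitmap are placeholder names), after which the mapped rows can be handed to a WIC BMP encoder or any other file writer:

#include <d2d1_1.h>
#include <d2d1_1helper.h>

// Copy a (possibly GPU-only) bitmap into a CPU-readable staging bitmap,
// then map it to get at the raw pixel bytes.
D2D1_SIZE_U size = sourceBitmap->GetPixelSize();

D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    sourceBitmap->GetPixelFormat());

ID2D1Bitmap1* staging = nullptr;
HRESULT hr = d2dContext->CreateBitmap(size, nullptr, 0, &props, &staging);
if (SUCCEEDED(hr))
{
    hr = staging->CopyFromBitmap(nullptr, sourceBitmap, nullptr);

    D2D1_MAPPED_RECT mapped = {};
    if (SUCCEEDED(hr)) hr = staging->Map(D2D1_MAP_OPTIONS_READ, &mapped);
    if (SUCCEEDED(hr))
    {
        // mapped.bits and mapped.pitch describe the pixel rows; write them
        // out (e.g. through a WIC BMP encoder), then unmap.
        staging->Unmap();
    }
    staging->Release();
}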

Direct2D bitmap brush elongated

被刻印的时光 ゝ submitted on 2020-01-15 05:51:31
Question: I have to draw my shapes on an offscreen bitmap, but I have a strange problem when I try to render the bitmap. This is how the image should be displayed: And this is how I actually see the bitmap: Following is the code I use to create the bitmap brush:

const auto size = renderTarget->GetSize();
const auto pxSize = D2D1::SizeU(size.width * 4, size.height * 4);

ID2D1BitmapRenderTarget* compatibleRenderTarget;
HRESULT hr = renderTarget->CreateCompatibleRenderTarget(size, pxSize, &compatibleRenderTarget);
if
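
The excerpt stops at the if. For context, a sketch of the usual continuation of this setup, i.e. drawing into the offscreen target and turning its bitmap into a brush (a guess at the intent, not the question's actual code). Note that CreateCompatibleRenderTarget takes the first size in DIPs and the second in pixels, so the 4x pixel size gives the offscreen target a higher DPI than the main target:

// Hypothetical continuation of the question's code.
if (SUCCEEDED(hr))
{
    compatibleRenderTarget->BeginDraw();
    // ... draw the shapes into the offscreen target here ...
    hr = compatibleRenderTarget->EndDraw();
}

ID2D1Bitmap* offscreenBitmap = nullptr;
if (SUCCEEDED(hr))
    hr = compatibleRenderTarget->GetBitmap(&offscreenBitmap);

ID2D1BitmapBrush* bitmapBrush = nullptr;
if (SUCCEEDED(hr))
    hr = renderTarget->CreateBitmapBrush(
        offscreenBitmap,
        D2D1::BitmapBrushProperties(D2D1_EXTEND_MODE_WRAP, D2D1_EXTEND_MODE_WRAP),
        &bitmapBrush);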

How to efficiently write pixels to the screen with Direct2D

萝らか妹 submitted on 2020-01-15 04:01:05
Question: I have an array of pixels (m_pixels) that I want to render to the screen using Direct2D. The array contains 10,000 elements (100 rows of 100 pixels). The code below loops over the pixels and draws them to the screen as 10x10 rectangles. Is there a more efficient way of performing this operation? How can I add a GaussianBlur effect to the pixels/image?

m_d2dContext->BeginDraw();
m_d2dContext->Clear(ColorF(0.0f, 0.0f, 0.0f));

// Render m_pixels
// m_pixels is updated by the solver directly
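
The excerpt ends before the drawing loop. A sketch of a common alternative (not the question's own code): upload the 100x100 array into an ID2D1Bitmap once per frame and draw it once, feeding it through CLSID_D2D1GaussianBlur for the blur. It assumes m_pixels holds 32-bit BGRA values and m_d2dContext is an ID2D1DeviceContext*:

#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <d2d1effects.h>   // CLSID_D2D1GaussianBlur

// One-time setup: a 100x100 bitmap plus a Gaussian blur effect reading from it.
ID2D1Bitmap1* bitmap = nullptr;
D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_NONE,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));
m_d2dContext->CreateBitmap(D2D1::SizeU(100, 100), nullptr, 0, &props, &bitmap);

ID2D1Effect* blur = nullptr;
m_d2dContext->CreateEffect(CLSID_D2D1GaussianBlur, &blur);
blur->SetInput(0, bitmap);
blur->SetValue(D2D1_GAUSSIANBLUR_PROP_STANDARD_DEVIATION, 2.0f);

// Per frame: one upload and one draw instead of 10,000 FillRectangle calls.
bitmap->CopyFromMemory(nullptr, m_pixels, 100 * sizeof(UINT32));    // pitch = one row
m_d2dContext->BeginDraw();
m_d2dContext->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f));
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Scale(10.0f, 10.0f));  // 100x100 -> 1000x1000
m_d2dContext->DrawImage(blur, D2D1_INTERPOLATION_MODE_NEAREST_NEIGHBOR);
m_d2dContext->EndDraw();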

Do I need to recreate Direct2D sharable resources?

社会主义新天地 submitted on 2020-01-06 02:39:12
Question: According to the documentation, even device-dependent resources are sharable among render targets when those render targets meet certain conditions. Assume I have two render targets (RT1 and RT2) which meet these conditions. I use RT1 to create a new device-dependent resource (ResourceA), and both RT1 and RT2 use this resource to do some drawing. Now, when I'm done with RT1, I get D2DERR_RECREATE_TARGET, which means I have to recreate RT1 and draw again. My question is: should I recreate the ResourceA
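
The question is cut off at this point. For reference, the device-loss pattern the Direct2D documentation recommends looks roughly like the sketch below; CreateDeviceResources and DiscardDeviceResources are hypothetical helpers that would (re)build the render targets and the device-dependent resources created from them:

// Typical render loop with device-loss handling (a sketch, not the question's code).
HRESULT hr = S_OK;

rt1->BeginDraw();
// ... drawing with ResourceA ...
hr = rt1->EndDraw();

if (hr == D2DERR_RECREATE_TARGET)
{
    // The render target's underlying device is gone; device-dependent
    // resources created against it are released here and rebuilt next frame.
    DiscardDeviceResources();     // releases RT1 and resources such as ResourceA
    hr = CreateDeviceResources();
}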

Multithreading in Direct2D

烂漫一生 submitted on 2020-01-05 12:10:28
Question: I'm trying to create a simple D2D game engine (it must be able to display and move images in a window, at least), and everything went fine until the moment I decided to switch to a multithreaded version. I read this MSDN article, and it recommends using one multithreaded factory from several threads. But this article claims it would be more effective to have several single-threaded factories (though that article describes a server-side rendering scenario, the principle is the same for my
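
The excerpt is truncated here. A minimal sketch of the first option, i.e. a single multithreaded factory shared by all rendering threads (Direct2D then serializes access to it internally); the alternative from the other article would be one factory per thread, created with D2D1_FACTORY_TYPE_SINGLE_THREADED instead:

#include <d2d1.h>
#pragma comment(lib, "d2d1")

// Created once, then used from any rendering thread.
ID2D1Factory* factory = nullptr;
HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, &factory);
// Each thread typically creates and drives its own render target from this
// shared factory; Direct2D takes an internal lock around factory and
// resource calls when the factory is multithreaded.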

Unresolved external symbol

别来无恙 submitted on 2020-01-03 17:36:51
Question: Main article: there is a header file and a source file. After copying those two files and adding a few headers:

#include <Windows.h>
#include <d2d1.h>
#pragma comment(lib, "d2d1")
#include <dwrite.h>
#include <d2d1helper.h>
#include "SafeRelease.h"

// SafeRelease file
template<class Interface>
inline void SafeRelease(Interface** ppInterfaceToRelease)
{
    if (*ppInterfaceToRelease != NULL)
    {
        (*ppInterfaceToRelease)->Release();
        (*ppInterfaceToRelease) = NULL;
    }
}

when I'm trying to compile this
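
The excerpt stops before the actual error text. One frequent cause of an unresolved external symbol with this particular set of headers is calling DWriteCreateFactory without linking the DirectWrite import library; this is only a hedged guess, since the missing symbol is not shown in the excerpt:

// Link DirectWrite the same way d2d1 is linked above (only relevant if the
// unresolved symbol turns out to be DWriteCreateFactory).
#pragma comment(lib, "dwrite")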

Smooth window resizing in Windows (using Direct2D 1.1)?

与世无争的帅哥 submitted on 2019-12-29 06:45:07
Question: It annoys me that the resizing of windows in Windows is not as "smooth" as I'd like it to be (this is the case with Windows programs in general, not just my own; Visual Studio is a good example). It makes the OS and its programs feel "flimsy" and "cheap" (yes, I care about how programs and user interfaces feel, in the same way I care about the sound and feel of closing a car door; it's a reflection of build quality), which in my view affects the overall UX and ultimately the perception of