How to take CPU memory (UCHAR buffer) into GPU memory (ID3D11Texture2D resource)

Submitted by 心已入冬 on 2019-12-11 07:09:22

Question


The code here runs on the GPU and captures the Windows screen, giving us an ID3D11Texture2D resource. Using ID3D11DeviceContext::Map, I copy the GPU resource into a BYTE buffer, and from the BYTE buffer into CPU memory: g_iMageBuffer, a UCHAR array.

Now I want to do the reverse: take the g_iMageBuffer buffer (CPU memory) back into an ID3D11Texture2D (GPU memory). Please, can someone help me with this reverse step? I am new to the graphics part.

//Variable declarations
IDXGIOutputDuplication* lDeskDupl;
IDXGIResource*          lDesktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO lFrameInfo;
DXGI_OUTDUPL_DESC       lOutputDuplDesc;        // filled earlier via lDeskDupl->GetDesc(&lOutputDuplDesc)
ID3D11Texture2D*        lAcquiredDesktopImage = nullptr;
ID3D11Texture2D*        lDestImage;             // staging texture (D3D11_USAGE_STAGING, CPU read access)
ID3D11DeviceContext*    lImmediateContext;
UCHAR*                  g_iMageBuffer = nullptr;

//Screen capture start here
hr = lDeskDupl->AcquireNextFrame(20, &lFrameInfo, &lDesktopResource);

// QueryInterface for ID3D11Texture2D
hr = lDesktopResource->QueryInterface(IID_PPV_ARGS(&lAcquiredDesktopImage));
lDesktopResource->Release();
lDesktopResource = nullptr;

// Copy the desktop image into the CPU-readable staging texture
lImmediateContext->CopyResource(lDestImage, lAcquiredDesktopImage);
lAcquiredDesktopImage->Release();
lDeskDupl->ReleaseFrame();

// Copy GPU Resource to CPU
D3D11_TEXTURE2D_DESC desc;
lDestImage->GetDesc(&desc);
D3D11_MAPPED_SUBRESOURCE resource;
UINT subresource = D3D11CalcSubresource(0, 0, 0);
// Read-only access is enough here; lDestImage must have been created
// with D3D11_CPU_ACCESS_READ for Map to succeed.
lImmediateContext->Map(lDestImage, subresource, D3D11_MAP_READ, 0, &resource);

UINT lBmpRowPitch = lOutputDuplDesc.ModeDesc.Width * 4;            // 4 bytes per BGRA pixel
std::unique_ptr<BYTE[]> pBuf(new BYTE[lBmpRowPitch * desc.Height]);
BYTE* sptr = reinterpret_cast<BYTE*>(resource.pData);
BYTE* dptr = pBuf.get() + lBmpRowPitch * (desc.Height - 1);        // start at the last row: the loop below flips the image vertically
UINT lRowPitch = std::min<UINT>(lBmpRowPitch, resource.RowPitch);  // copy the smaller of the two pitches per row

for (size_t h = 0; h < lOutputDuplDesc.ModeDesc.Height; ++h)
{
    memcpy_s(dptr, lBmpRowPitch, sptr, lRowPitch);
    sptr += resource.RowPitch;
    dptr -= lBmpRowPitch;
}

lImmediateContext->Unmap(lDestImage, subresource);
long g_captureSize = lBmpRowPitch * desc.Height;   // tightly packed image size
g_iMageBuffer = new UCHAR[g_captureSize];          // single allocation; release later with delete[]

//Copying to UCHAR buffer 
memcpy(g_iMageBuffer, pBuf.get(), g_captureSize);

Answer 1:


You don't need reverse engineering. What you describe is called "loading a texture".

How to: Initialize a Texture Programmatically
How to: Initialize a Texture From a File
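
For the first approach, here is a minimal sketch of creating an ID3D11Texture2D directly from g_iMageBuffer. It assumes the buffer holds tightly packed 32-bit BGRA pixels (DXGI_FORMAT_B8G8R8A8_UNORM, the desktop duplication format), and lDevice (the ID3D11Device) and pUploadedTexture are names not in the question's snippet:

// Sketch: create a GPU texture initialized from a CPU pixel buffer.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = lOutputDuplDesc.ModeDesc.Width;
texDesc.Height           = lOutputDuplDesc.ModeDesc.Height;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;            // GPU-accessible
texDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem     = g_iMageBuffer;                      // source CPU memory
initData.SysMemPitch = texDesc.Width * 4;                  // bytes per row in that buffer

ID3D11Texture2D* pUploadedTexture = nullptr;               // hypothetical name
HRESULT hr = lDevice->CreateTexture2D(&texDesc, &initData, &pUploadedTexture);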

As you appear to be new to DirectX programming, consider working through the DirectX Tool Kit for DX11 tutorials. In particular, make sure you read the section on ComPtr and ThrowIfFailed.
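
If the texture has to be refreshed with every captured frame, a sketch using ID3D11DeviceContext::UpdateSubresource lets you reuse the texture created above instead of recreating it (same BGRA layout assumed; lImmediateContext comes from the question's snippet):

// Sketch: push new CPU pixels into the existing DEFAULT-usage texture.
lImmediateContext->UpdateSubresource(
    pUploadedTexture,                       // destination resource
    0,                                      // subresource index (mip 0, slice 0)
    nullptr,                                // nullptr = update the whole texture
    g_iMageBuffer,                          // source CPU memory
    lOutputDuplDesc.ModeDesc.Width * 4,     // source row pitch in bytes
    0);                                     // source depth pitch (unused for 2D)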



Source: https://stackoverflow.com/questions/47328286/how-to-take-cpu-memoryuchar-buffer-in-to-gpu-memoryid3d11texture2d-resource
