ms-media-foundation

How to properly use a hardware accelerated Media Foundation Source Reader to decode a video?

萝らか妹 submitted on 2019-12-03 15:17:31
I'm in the process of writing a hardware-accelerated H.264 decoder using Media Foundation's Source Reader, but have encountered a problem. I followed this tutorial and consulted the Windows SDK Media Foundation samples. My app seems to work fine when hardware acceleration is turned off, but it doesn't provide the performance I need. When I turn the acceleration on by passing an IMFDXGIDeviceManager via the IMFAttributes used to create the reader, things get complicated. If I create the ID3D11Device using the D3D_DRIVER_TYPE_NULL driver, the app works fine and the frames are processed faster
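
For context, a minimal sketch of the setup described above, assuming MFStartup has already been called and the file is opened from a URL; the function name and the lack of full error cleanup are illustrative only, not the asker's code. The essential steps are creating a real hardware D3D11 device (rather than D3D_DRIVER_TYPE_NULL) with video support, making it thread-safe, and handing the IMFDXGIDeviceManager to the reader via MF_SOURCE_READER_D3D_MANAGER:

#include <windows.h>
#include <d3d10.h>   // ID3D10Multithread
#include <d3d11.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Sketch: create a Source Reader that decodes H.264 on the GPU.
HRESULT CreateHardwareReader(const wchar_t* url, IMFSourceReader** ppReader)
{
    ID3D11Device*         pDevice     = nullptr;
    IMFDXGIDeviceManager* pManager    = nullptr;
    IMFAttributes*        pAttributes = nullptr;
    UINT                  resetToken  = 0;

    // A real hardware device with video support is required for GPU decoding.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   D3D11_CREATE_DEVICE_VIDEO_SUPPORT,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &pDevice, nullptr, nullptr);

    // The decoder calls into the device from worker threads, so enable multithread protection.
    if (SUCCEEDED(hr))
    {
        ID3D10Multithread* pMT = nullptr;
        hr = pDevice->QueryInterface(IID_PPV_ARGS(&pMT));
        if (SUCCEEDED(hr)) { pMT->SetMultithreadProtected(TRUE); pMT->Release(); }
    }

    if (SUCCEEDED(hr)) hr = MFCreateDXGIDeviceManager(&resetToken, &pManager);
    if (SUCCEEDED(hr)) hr = pManager->ResetDevice(pDevice, resetToken);

    if (SUCCEEDED(hr)) hr = MFCreateAttributes(&pAttributes, 2);
    if (SUCCEEDED(hr)) hr = pAttributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pManager);
    if (SUCCEEDED(hr)) hr = pAttributes->SetUINT32(MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING, TRUE);

    if (SUCCEEDED(hr)) hr = MFCreateSourceReaderFromURL(url, pAttributes, ppReader);

    if (pAttributes) pAttributes->Release();
    if (pManager)    pManager->Release();
    if (pDevice)     pDevice->Release();
    return hr;
}

With this in place, ReadSample typically returns samples whose buffers wrap DXGI surfaces rather than system-memory buffers, which is where most of the hardware-path complications (and the performance) come from.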

Can TopoEdit Be used to load a topology for a Session created by my application?

让人想犯罪 __ submitted on 2019-12-03 13:55:39
Question: I would like to be able to explore the topologies created by my application in TopoEdit. In DirectShow development you can use GraphEdit: if you register a graph created by your software in the global Running Object Table using the base-class AddGraphToRot function, you can then load it in GraphEdit. Is there any way to do the same in TopoEdit? Answer 1: DirectShow GraphEdit's ability to connect to remote COM objects is based on the availability of proxy/stub pairs for DirectShow interfaces and set
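
For reference, the GraphEdit-side technique the question alludes to is the AddGraphToRot helper from the DirectShow documentation, reproduced here in roughly its documented form (TopoEdit has no documented counterpart for attaching to a session running in another process):

#include <windows.h>
#include <dshow.h>
#include <strsafe.h>

// Register a filter graph in the Running Object Table so GraphEdit can attach to it
// via File > Connect to Remote Graph. Revoke the registration on shutdown.
HRESULT AddGraphToRot(IUnknown* pUnkGraph, DWORD* pdwRegister)
{
    IMoniker*            pMoniker = nullptr;
    IRunningObjectTable* pROT     = nullptr;

    HRESULT hr = GetRunningObjectTable(0, &pROT);
    if (FAILED(hr)) return hr;

    // GraphEdit looks for item monikers with exactly this name format.
    WCHAR wsz[128];
    StringCchPrintfW(wsz, 128, L"FilterGraph %08x pid %08x",
                     (DWORD)(DWORD_PTR)pUnkGraph, GetCurrentProcessId());

    hr = CreateItemMoniker(L"!", wsz, &pMoniker);
    if (SUCCEEDED(hr))
    {
        hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph, pMoniker, pdwRegister);
        pMoniker->Release();
    }
    pROT->Release();
    return hr;
}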

DXGI Desktop Duplication: encoding frames to send them over the network

为君一笑 submitted on 2019-12-03 01:32:59
I'm trying to write an app which will capture a video stream of the screen and send it to a remote client. I've found that the best way to capture the screen on Windows is to use the DXGI Desktop Duplication API (available since Windows 8). Microsoft provides a neat sample which streams the duplicated frames back to the screen. Now, I've been wondering what is the easiest, but still relatively fast, way to encode those frames and send them over the network. The frames come from AcquireNextFrame with a surface that contains the desktop bitmap and metadata which contains the dirty and move regions that were updated
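
One route that is commonly suggested for this, sketched under a few assumptions: the duplicated texture is DXGI_FORMAT_B8G8R8A8_UNORM, the IMFSinkWriter has been configured with an MFVideoFormat_ARGB32 input type and an MFVideoFormat_H264 output type, and it shares the duplication device's IMFDXGIDeviceManager. The function and parameter names are illustrative, not part of the Microsoft sample:

#include <windows.h>
#include <d3d11.h>
#include <mfapi.h>
#include <mfreadwrite.h>

// Wrap a duplicated desktop texture in an IMFSample and hand it to an H.264 sink writer,
// without copying the frame back to system memory.
HRESULT WriteDuplicatedFrame(IMFSinkWriter* pWriter, DWORD streamIndex,
                             ID3D11Texture2D* pDesktopTexture,
                             LONGLONG timestamp100ns, LONGLONG duration100ns)
{
    IMFMediaBuffer* pBuffer = nullptr;
    IMFSample*      pSample = nullptr;

    HRESULT hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), pDesktopTexture,
                                           0 /* subresource */, FALSE, &pBuffer);
    if (SUCCEEDED(hr)) hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr)) hr = pSample->AddBuffer(pBuffer);
    if (SUCCEEDED(hr)) hr = pSample->SetSampleTime(timestamp100ns);
    if (SUCCEEDED(hr)) hr = pSample->SetSampleDuration(duration100ns);
    if (SUCCEEDED(hr)) hr = pWriter->WriteSample(streamIndex, pSample);

    if (pSample) pSample->Release();
    if (pBuffer) pBuffer->Release();
    return hr;
}

Giving the sink writer the same IMFDXGIDeviceManager (MF_SINK_WRITER_D3D_MANAGER) lets it pick a hardware H.264 encoder MFT and keep the whole path on the GPU; the sink writer normally writes to a file or a caller-supplied IMFByteStream, so shipping the encoded bitstream over the network remains the application's job.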

How do I grab frames from a video stream on Windows 8 modern apps?

十年热恋 submitted on 2019-12-02 22:52:45
I am trying to extract images out of an mp4 video stream. After looking things up, it seems like the proper way of doing that is to use Media Foundation in C++ and open the frames/read them out of it. There is very little by way of documentation and samples, but after some digging, it seems like some people have had success in doing this by reading frames into a texture and copying the contents of that texture to a memory-readable texture (I am not even sure if I am using the correct terms here). Trying what I found, though, gives me errors and I am probably doing a bunch of stuff wrong. Here's a
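
For orientation, the "copy to a memory-readable texture" step described above is usually a D3D11 staging-texture copy. A sketch follows, assuming the decoded ID3D11Texture2D has already been pulled out of the sample (via the buffer's IMFDXGIBuffer interface) and that pDevice/pContext belong to the same device the Source Reader decodes with; the function name is illustrative:

#include <d3d11.h>

// Copy a decoded GPU frame into a staging texture that the CPU is allowed to map and read.
HRESULT CopyFrameToCpu(ID3D11Device* pDevice, ID3D11DeviceContext* pContext,
                       ID3D11Texture2D* pDecodedFrame, ID3D11Texture2D** ppStaging)
{
    D3D11_TEXTURE2D_DESC desc = {};
    pDecodedFrame->GetDesc(&desc);

    // Same size and format as the decoded frame, but CPU-readable and not GPU-bindable.
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.BindFlags      = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags      = 0;

    ID3D11Texture2D* pStaging = nullptr;
    HRESULT hr = pDevice->CreateTexture2D(&desc, nullptr, &pStaging);
    if (FAILED(hr)) return hr;

    pContext->CopyResource(pStaging, pDecodedFrame);

    // The caller can now Map() pStaging with D3D11_MAP_READ and walk the rows;
    // decoded frames are commonly NV12, so account for the chroma plane when converting.
    *ppStaging = pStaging;
    return S_OK;
}

A slower but much simpler alternative is to leave hardware acceleration off and let the Source Reader deliver system-memory samples, then read the pixels through IMFMediaBuffer::Lock.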

Media Foundation using C instead of C++

风流意气都作罢 submitted on 2019-12-02 17:28:51
Question: I am learning to use the Media Foundation API from sample code shown on the Microsoft website, using C instead of C++. The sample code is shown below. HRESULT CreateVideoCaptureDevice(IMFMediaSource **ppSource) { *ppSource = NULL; UINT32 count = 0; IMFAttributes *pConfig = NULL; IMFActivate **ppDevices = NULL; // Create an attribute store to hold the search criteria. HRESULT hr = MFCreateAttributes(&pConfig, 1); // Request video capture devices. if (SUCCEEDED(hr)) { hr = pConfig->SetGUID( MF
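
The usual way to express that snippet in plain C (sketched below; the function name and error handling are illustrative) is to define COBJMACROS before the includes and call the interface methods through the generated macros, which expand to lpVtbl calls with the interface pointer passed explicitly:

#define COBJMACROS
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mferror.h>

/* C version of the documented device-enumeration sequence. */
HRESULT CreateVideoCaptureDeviceC(IMFMediaSource **ppSource)
{
    IMFAttributes *pConfig   = NULL;
    IMFActivate  **ppDevices = NULL;
    UINT32         count     = 0;
    HRESULT        hr;

    *ppSource = NULL;

    /* Create an attribute store to hold the search criteria. */
    hr = MFCreateAttributes(&pConfig, 1);

    /* pConfig->SetGUID(...) in C++ becomes IMFAttributes_SetGUID(pConfig, ...),
       i.e. pConfig->lpVtbl->SetGUID(pConfig, ...). Note the GUIDs are passed by address in C. */
    if (SUCCEEDED(hr))
        hr = IMFAttributes_SetGUID(pConfig,
                                   &MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                                   &MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

    /* Enumerate video capture devices. */
    if (SUCCEEDED(hr))
        hr = MFEnumDeviceSources(pConfig, &ppDevices, &count);

    /* Activate the first device, if any were found. */
    if (SUCCEEDED(hr))
        hr = (count > 0)
           ? IMFActivate_ActivateObject(ppDevices[0], &IID_IMFMediaSource, (void**)ppSource)
           : MF_E_NOT_FOUND;

    for (UINT32 i = 0; i < count; i++)
        IMFActivate_Release(ppDevices[i]);
    CoTaskMemFree(ppDevices);
    if (pConfig) IMFAttributes_Release(pConfig);
    return hr;
}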

What is the status of Microsoft Media Foundation?

♀尐吖头ヾ submitted on 2019-12-02 16:41:19
Microsoft Media Foundation (MF) was introduced as the successor to DirectShow in Windows Vista. I have mostly ignored it, but it has some features (such as decoding of WMV/VC-1 files) which are hard to implement in DirectShow. Media Foundation is also a more modern API, so it would seem logical to make the switch. However, the online teaching resources and official documentation seem greatly lacking. There is only one book covering the topic (published by Microsoft) and it is no longer available at normal prices. (People charge $500 or more for second-hand copies.) As far as I could find

Custom virtual video capture device

吃可爱长大的小学妹 submitted on 2019-12-02 09:23:38
I'm new to Media Foundation and C++, but I want to create a virtual video capture device which can be used by Microsoft Expression Encoder. Can you tell me in which direction to look? I think it should be something that works asynchronously, and the source will be a byte stream from a mobile device. Thanks in advance. Roman R.: I don't think you want to look into Media Foundation for this. Expression Encoder uses a richer API to capture video with: DirectShow. You want a virtual DirectShow camera, which has been discussed multiple times and has a simple sample project to start from. Virtual webcam input as

Virtual Driver Cam not recognized by browser

浪子不回头ぞ submitted on 2019-12-02 03:24:06
I'm playing with the "Capture Source Filter" from http://tmhare.mvps.org/downloads.htm . After registering the .ax filter, I'm trying to understand its compatibility across applications that use video sources. For example, Skype recognizes it while browsers (Edge, Chrome) don't. I wonder if it's a limitation of the approach used (a DirectShow filter) or just a matter of configuration. The purpose of the question is to understand whether that approach is still useful or it's better to move on to Media Foundation. I described this here: Applicability of Virtual DirectShow Sources Your virtual camera