ms-media-foundation

Intel graphics hardware H264 MFT ProcessInput call fails after feeding a few input samples; the same code works fine with the Nvidia hardware MFT

为君一笑 submitted on 2019-12-17 19:23:19
Question: I'm capturing the desktop using the Desktop Duplication API and converting the samples from RGBA to NV12 on the GPU, then feeding them to the Media Foundation hardware H264 MFT. This works fine with Nvidia graphics, and also with software encoders, but it fails when only the Intel graphics hardware MFT is available. The code works fine on the same Intel graphics machine if I fall back to the software MFT. I have also verified that the encoding is actually done in hardware on Nvidia graphics machines. On Intel graphics

Getting a green screen in ffplay: streaming the desktop (DirectX surface) as H264 video over RTP using Live555

青春壹個敷衍的年華 submitted on 2019-12-17 18:56:20
Question: I'm trying to stream the desktop (a DirectX surface in NV12 format) as H264 video over an RTP stream using Live555 and Windows Media Foundation's hardware encoder on Windows 10, expecting it to be rendered by ffplay (ffmpeg 4.2), but I only get a green screen like the one below. I referred to the MFWebCamToRTP Media Foundation sample and "Encoding DirectX surface using hardware MFT" to implement Live555's FramedSource, changing the input source to a DirectX surface instead of the webcam. Here is an excerpt of my
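A solid green frame from an NV12 pipeline often means the chroma plane is missing or misplaced (zeroed luma with absent chroma decodes to green), and a common cause is copying a mapped GPU texture as if its row pitch equaled the frame width. As a hedged sketch of the idea, not the questioner's code, this packs an NV12 image from a padded source into the tight layout an encoder sample expects (`PackNv12` and its parameters are illustrative names):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Packs a mapped NV12 image (row pitch >= width, as D3D11 Map typically
// returns) into a tightly packed buffer: height rows of luma followed by
// height/2 rows of interleaved U/V. Copying whole planes with a single
// memcpy would smear the padding bytes into the stream.
std::vector<uint8_t> PackNv12(const uint8_t* mapped, int rowPitch,
                              int width, int height) {
    std::vector<uint8_t> packed(static_cast<size_t>(width) * height * 3 / 2);
    uint8_t* dst = packed.data();
    // Luma plane: copy each row without its pitch padding.
    for (int row = 0; row < height; ++row) {
        std::memcpy(dst, mapped + static_cast<size_t>(row) * rowPitch, width);
        dst += width;
    }
    // The interleaved chroma plane starts after `height` padded rows
    // and contains height/2 rows of width bytes.
    const uint8_t* uv = mapped + static_cast<size_t>(height) * rowPitch;
    for (int row = 0; row < height / 2; ++row) {
        std::memcpy(dst, uv + static_cast<size_t>(row) * rowPitch, width);
        dst += width;
    }
    return packed;
}
```

If the receiver shows green while the sender believes it is emitting valid NV12, checking that both planes survive this repacking (and that SPS/PPS actually reach the client) is usually the first step.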

Should I release the returned IMFSample of an internally allocated MFT output buffer?

自古美人都是妖i submitted on 2019-12-13 16:23:12
Question: Media Foundation Transform (MFT) objects may implement an output buffer allocation model where the buffers are allocated internally by the MFT object. In that case, the internally allocated buffer is returned via the pSample member of an MFT_OUTPUT_DATA_BUFFER structure that is passed to the IMFTransform::ProcessOutput() method. From the MFT_OUTPUT_DATA_BUFFER structure documentation: typedef struct _MFT_OUTPUT_DATA_BUFFER { DWORD dwStreamID; IMFSample *pSample; DWORD dwStatus;
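The COM convention at stake here is that an object returned through an out-parameter carries a reference the caller owns and must release exactly once. As a portable toy illustration of that ownership rule (MiniSample and FakeProcessOutput are made-up names, not Media Foundation APIs):

```cpp
// Toy stand-in for a COM object, illustrating the ownership rule: when a
// ProcessOutput-style call fills an out-parameter with an object it
// allocated itself, the caller receives a reference and must Release() it.
struct MiniSample {
    int refs = 1;                       // the allocation holds one reference
    void AddRef() { ++refs; }
    int Release() { int r = --refs; if (r == 0) delete this; return r; }
};

// Mimics an MFT that allocates its own output sample and hands it back
// through the out-parameter; the returned reference belongs to the caller.
void FakeProcessOutput(MiniSample** outSample) {
    *outSample = new MiniSample();
}
```

In real Media Foundation code the analogous cleanup would be releasing `outputBuffer.pSample` after consuming the data, but only in the case where the MFT (rather than the caller) allocated the sample.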

Media Foundation with C#

时光总嘲笑我的痴心妄想 submitted on 2019-12-13 12:28:17
Question: Media Foundation is the technology recommended by Microsoft, and it is well supported with a lot of samples and explanations, but all of them use native code. I found a wrapper on SourceForge that lets me use Media Foundation from C#, but while reading I see people saying that not everything can be done from managed code. I have some tasks to do with MF: capture live video, change the resolution and bit rate, encode video, decode video. Can I use .NET for these tasks, or do I have to use the native

IMFTransform interface of Color Converter DSP giving E_INVALIDARG on SetInputType/SetOutputType

£可爱£侵袭症+ submitted on 2019-12-13 12:12:32
Question: I'm trying to use the Color Converter DSP (http://msdn.microsoft.com/en-us/library/windows/desktop/ff819079(v=vs.85).aspx) to convert RGB24 to YV12/NV12 via Media Foundation. I've created an instance of the Color Converter DSP via CLSID_CColorConvertDMO and then tried to set the needed input/output types, but the calls always return E_INVALIDARG, even when using media types that are returned by GetOutputAvailableType and GetInputAvailableType. If I set the media type to NULL, then I get the error that

Video playback detection with Win32 API?

一曲冷凌霜 submitted on 2019-12-13 06:15:33
Question: Is there a way to detect whether video playback is running using the Win32 API? For audio, I can detect playback processes by enumerating playback devices with MMDeviceEnumerator and, for each device, enumerating sessions with IAudioSessionManager. I'd like to do something similar for video playback. Ideally, a method that works for any application; if that is impossible, a method that works for a specific framework (DirectShow, Media Foundation, etc.) is OK. Thanks. Source: https://stackoverflow.com

Painting frames while media session is paused

六眼飞鱼酱① submitted on 2019-12-13 05:16:21
Question: I'm working on a custom video player using the Media Foundation framework. Currently, I can play, pause, stop, or change the rate of playback using an IMFMediaSession. I can also retrieve a single frame using an IMFSourceReader. I am currently able to render a frame (IMFSample) to a window area (an HWND), but only when the media session is stopped. My goal is to be able to render a frame while the media session is paused (i.e. doing frame-stepping using a source reader and not the media

How to convert RTP H.264 payload into playable file using Media Foundation

牧云@^-^@ submitted on 2019-12-13 05:02:11
Question: I need a way to make a video file from H.264 RTP frames (payload type 96) that I receive using Managed Media Aggregation - https://net7mma.codeplex.com/. I am trying to use Media Foundation in managed code. I saw http://mfnet.sourceforge.net/ but I couldn't find out how to do it. I saw someone say in a forum that it is better to use the Media Foundation DLLs from C# managed code. Does anybody have any experience working with this? EDIT: I am trying to use VLCDotNet to put

Why does MFTEnumEx() corrupt the stack?

。_饼干妹妹 submitted on 2019-12-13 04:27:17
Question: Below you can see some dummy code for enumerating the available multiplexers. On my system there is only one mux available (as expected). When I call MFTEnumEx(), the function succeeds, but the stack gets corrupted; that's why I added the 64 KB buffer. Sixteen bytes get written at offset 16. I tried this code on two different machines with the same result (Windows 10). Can somebody explain this? BYTE buff[ 65536 ]; HRESULT hr; hr = CoInitialize( NULL ); ATLASSERT( SUCCEEDED( hr ) ); hr =

Does DirectShow allow one to decode virtually any video based on installed codecs?

倾然丶 夕夏残阳落幕 submitted on 2019-12-13 03:29:18
Question: I am comparing VFW, Media Foundation, and DirectShow. Although VFW is very old and dated, it at least allows a lot of flexibility in encoding and decoding video, because you can choose virtually any encoder/decoder, AFAIK, and you are not limited to a subset of decoders/encoders that only Microsoft has chosen. Does DirectShow offer the ability to decode (decompress) multiple kinds of video (like VFW) using any chosen codec, or must you use only a subset that Microsoft has chosen? Indeed some api