DirectShow

Get IplImage or Mat from DirectShow to OpenCV

放肆的年华 submitted on 2019-12-04 16:59:24
I had to switch to DirectShow for my eye-tracking software because of the difficulty of changing the camera resolution when using C++ and OpenCV. DirectShow is new to me and it is kind of hard to understand everything. But I found this nice example that works perfectly for capturing and viewing the web cam: http://www.codeproject.com/Articles/12869/Real-time-video-image-processing-frame-grabber-usi I am using the version that does not require the DirectShow SDK. (But it is still DirectShow that is used in the example, right?)

#include <windows.h>
#include <dshow.h>
#pragma comment(lib, "Strmiids.lib")

Decode MPEG-1/2 video with Media Foundation

℡╲_俬逩灬. submitted on 2019-12-04 16:45:41
I am using Media Foundation to play videos. On Windows 7 some videos are encoded with MPEG-1/MPEG-2 PS and Windows Media Player can play them, but Media Foundation cannot (I tried to register a stub MPEG1Source but it still does not work). I noticed some of these files could be played with DirectShow but not Media Foundation (they cannot be opened at all). I guess Windows Media Player does not use Media Foundation only? Is it possible to use Media Foundation to play these files? If not, how does Media Player work? Thanks a lot. P.S. I have read the Windows SDK and I registered a "fake" MPEG-1 decoder and it

How to capture live camera frames in RGB with DirectShow

可紊 submitted on 2019-12-04 14:51:36
I'm implementing live video capture through DirectShow for live processing and display (an augmented-reality app). I can access the pixels easily enough, but it seems I can't get the SampleGrabber to provide RGB data. The device (an iSight -- running VC++ Express in VMware) only reports MEDIASUBTYPE_YUY2. After extensive Googling, I still can't figure out whether DirectShow is supposed to provide built-in color-space conversion for this sort of thing. Some sites report that there is no YUV<-
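If the device only offers YUY2, one fallback is converting to RGB in software after grabbing the sample. A minimal sketch of that conversion, separate from any DirectShow plumbing (the function name is mine, and the BT.601 full-range coefficients are an assumption -- check what your camera actually delivers):

```cpp
#include <algorithm>
#include <cstdint>

// Clamp a float into the 0..255 byte range.
static uint8_t Clamp8(float v) {
    return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, v)));
}

// Convert a YUY2 buffer (Y0 U Y1 V per macropixel, two pixels each)
// into packed RGB24. Assumes full-range BT.601 coefficients.
void Yuy2ToRgb24(const uint8_t* src, uint8_t* dst, int width, int height) {
    for (int i = 0; i < width * height / 2; ++i) {
        float y0 = src[0], u = src[1] - 128.0f;
        float y1 = src[2], v = src[3] - 128.0f;
        for (float y : {y0, y1}) {
            dst[0] = Clamp8(y + 1.402f * v);              // R
            dst[1] = Clamp8(y - 0.344f * u - 0.714f * v); // G
            dst[2] = Clamp8(y + 1.772f * u);              // B
            dst += 3;
        }
        src += 4; // one YUY2 macropixel consumed
    }
}
```

Alternatively, inserting the stock Color Space Converter filter between the capture pin and the SampleGrabber often gets RGB24 without writing any conversion code.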

How can a filter implementing IStream know when it won't receive any further IStream commands

感情迁移 submitted on 2019-12-04 14:36:33
I've written a filter that implements IStream (the COM interface, not the C++ standard library class). That's all working well, but my problem is that I'm not sure when (if ever) I can be sure that no further IStream commands will be sent, so that the stream behind IStream can be closed. The simplest place to close the stream would be in Stop() on my filter, but this is too early. According to the MSDN docs, the filter graph manager calls Stop() on the filters in the graph in upstream order, so my filter will get stopped before an upstream mux filter, which typically will use IStream to do any end-of-

Custom File Format And Codec?

走远了吗. submitted on 2019-12-04 14:08:42
I've been messed up with a codec issue for days and still can't see the big picture. It is my first time dealing with audio/video formats and codecs, so I really need some help with this. Here is the work: I'm writing several components that are responsible for encoding and decoding customized MPEG files. On top of the standard de/compression process (for both audio and video) I will implement some custom en/decryption, writing both the codec and software libraries for this. The things I can't figure out are listed here. For WMP, what is the codec-locating policy? How do I differentiate my custom file format

Can't change video capture resolution using C#

纵然是瞬间 submitted on 2019-12-04 12:36:49
I am trying to change the default webcam resolution using DirectShowNet in C#. From what I gather, I need to change it via the built-in VideoInfoHeader structure from the Win32 AVI-capture API. I have the following code using DirectShowNet:

hr = capGraph.SetFiltergraph(graphBuilder);
if (hr < 0)
    Marshal.ThrowExceptionForHR(hr);

AMMediaType media = new AMMediaType();
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatType = FormatType.VideoInfo; // ???

hr = sampGrabber.SetMediaType(media);
if (hr < 0)
    Marshal.ThrowExceptionForHR(hr);
hr =
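Whichever wrapper is used, changing capture resolution ultimately means rewriting the width, height, and buffer size in the VIDEOINFOHEADER carried by the media type (natively via IAMStreamConfig::SetFormat) before the pins connect. The bookkeeping, sketched with trimmed stand-in structs rather than the real amvideo.h definitions, looks roughly like this:

```cpp
#include <cstdint>

// Trimmed stand-ins for the BITMAPINFOHEADER/VIDEOINFOHEADER fields this
// sketch needs; real code should use the definitions from the Windows SDK.
struct BitmapInfoHeader {
    int32_t  biWidth;
    int32_t  biHeight;
    uint16_t biBitCount;  // e.g. 24 for RGB24
    uint32_t biSizeImage;
};
struct VideoInfoHeader {
    BitmapInfoHeader bmiHeader;
};

// Patch a video format to a new capture resolution, keeping the image
// buffer size consistent: rows are padded to 4-byte (DWORD) boundaries.
void SetCaptureResolution(VideoInfoHeader& vih, int width, int height) {
    vih.bmiHeader.biWidth  = width;
    vih.bmiHeader.biHeight = height;
    uint32_t stride = ((width * vih.bmiHeader.biBitCount + 31) / 32) * 4;
    vih.bmiHeader.biSizeImage = stride * static_cast<uint32_t>(height);
}
```

If biSizeImage is left inconsistent with the new dimensions, pin connection typically fails or the graph falls back to the default resolution, which may be why setting only the media subtype has no effect.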

Is there any example to show how to write a DirectShow transform filter?

痞子三分冷 submitted on 2019-12-04 12:32:27
I want to capture the current frame and its previous one, do analysis, and produce a new frame to show. Is that to say I must write a DirectShow transform filter? I am a newbie to DirectShow and was confused by MSDN's many documents, so I wonder if there is any simple example that shows how to do it. Thanks. Answer (Cook Goz): In the DirectShow samples that come with the Platform SDK you, at least, always USED to get examples of how to make all sorts of filters. I can't believe they would have removed that. It made DirectShow almost usable :) This may help: Writing Transform Filters; EZRGB24 Filter

Using DirectShow to capture frames and OpenCV to Process

白昼怎懂夜的黑 submitted on 2019-12-04 11:52:50
I have made two different solutions for video-to-image capturing and was wondering if I could intertwine the best of both worlds. I am currently using DirectShow to load an AVI file and capture images. However, DirectShow's lack of image-processing capabilities and the need to make additional filters have stopped me dead in my tracks. I then turned to OpenCV. It has all the image-processing functions I need, but it has trouble loading the videos that the DirectShow solution was able to
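Bridging the two is largely a memory-layout problem: RGB buffers from a DirectShow sample grabber are usually bottom-up DIBs, while OpenCV expects top-down rows, so the rows need flipping before the buffer is wrapped in a cv::Mat or IplImage. A sketch of that copy, written without the OpenCV dependency and assuming tightly packed RGB24 rows (no stride padding):

```cpp
#include <cstdint>
#include <cstring>

// Copy a bottom-up RGB24 frame (as DirectShow grabbers typically deliver
// it) into a top-down buffer that OpenCV can wrap directly.
// Assumes tightly packed rows of width * 3 bytes.
void FlipBottomUpToTopDown(const uint8_t* src, uint8_t* dst,
                           int width, int height) {
    const size_t rowBytes = static_cast<size_t>(width) * 3;
    for (int y = 0; y < height; ++y) {
        // Destination row y comes from source row (height - 1 - y).
        std::memcpy(dst + y * rowBytes,
                    src + (static_cast<size_t>(height) - 1 - y) * rowBytes,
                    rowBytes);
    }
}
```

The flipped buffer can then be wrapped without a further copy, e.g. cv::Mat(height, width, CV_8UC3, dst), keeping DirectShow for capture and OpenCV for processing.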

DirectShow - passing parameters to custom source push filter

泪湿孤枕 submitted on 2019-12-04 09:50:15
I'm working on a solution that will be used to receive a video stream from remote hosts and to put various texts on top of it. Currently it consists of a custom DirectShow push source filter (C++), which receives data from remote hosts using the RTP protocol, and a tiny C# application that sets up the DirectShow graph and is used as a container for the video. I'm using the DirectShowLib interop library. However, I'm not sure how to pass parameters from this C# app to my custom filter. What are the possible ways to do it? Answer: The simplest way is to register your own protocol (create a key myproto under HKCR, and then

Audio Sync problems using DirectShow.NET

若如初见. submitted on 2019-12-04 09:24:28
I have started a thread on this at DirectShow.NET's forum; here is the link: http://sourceforge.net/projects/directshownet/forums/forum/460697/topic/5194414/index/page/1 but unfortunately the problem still persists. I have an application that captures video from a webcam and audio from the microphone and saves them to a file. For some reason the audio and video are never in sync. I tried the following:
1. Started with the ffdshow encoder and changed to AVI Mux -- the problem persists: the audio is delayed, and at the end of the video the picture remains frozen while the audio continues.
2. Changed from AVI Mux