GStreamer

GStreamer RTSP playing (with sound)

随声附和 submitted on 2019-12-08 02:23:24
Question: I'm a newbie with GStreamer and am simply trying to watch the RTSP video stream from a D-Link 2103 camera. When I try this (video only):

gst-launch rtspsrc location=rtsp://192.168.0.20/live1.sdp ! \
  rtph264depay ! \
  h264parse ! capsfilter caps="video/x-h264,width=1280,height=800,framerate=(fraction)25/1" ! ffdec_h264 ! ffmpegcolorspace ! autovideosink

it works. When I try this (audio only):

gst-launch rtspsrc location=rtsp://192.168.0.20/live1.sdp ! \
  rtpg726depay ! ffdec_g726 ! audioconvert ! audioresample !
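A minimal sketch of a combined audio-plus-video pipeline, assuming the camera's audio really is G.726 as the audio-only attempt suggests: naming the rtspsrc lets both branches link to its dynamic pads. This uses the same 0.10-era element names as the question and is untested against real hardware; the URL and caps come from the question and may need adjusting.

```shell
# One rtspsrc, two branches. "name=src" lets us refer to the source twice;
# each "src. !" link picks up one of its dynamic pads (video and audio).
gst-launch rtspsrc location=rtsp://192.168.0.20/live1.sdp name=src \
  src. ! rtph264depay ! h264parse ! ffdec_h264 ! ffmpegcolorspace ! autovideosink \
  src. ! rtpg726depay ! ffdec_g726 ! audioconvert ! audioresample ! autoaudiosink
```

Adding a queue element at the head of each branch is usually advisable so the two branches decode in separate threads.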

GStreamer C++ on Visual Studio 2010?

妖精的绣舞 submitted on 2019-12-08 01:59:53
Question: I am following the instructions at http://docs.gstreamer.com/display/GstSDK/Installing+on+Windows to install GStreamer and compile the tutorials/examples on Windows 7, using Visual Studio 2010. After installing the SDK, I try to compile the "hello world" example and get: Cannot open include file: 'gst/gst.h': No such file or directory. Odd - the tutorials were supposedly configured with the paths to these files. Nevertheless, we can manually add them... Add C:\gstreamer-sdk\0.10\x86\include

GStreamermm: creating a new element type (in plugin) by deriving from Gst::Element

僤鯓⒐⒋嵵緔 submitted on 2019-12-08 00:40:37
Question: The standard way to create a new element type in a plugin is GObject-style "derivation" from the GstElement type, with all the GObject magic that entails, like this. I'm writing a project in C++ which uses GStreamer with some elements specialized for my purpose. I've written several plugins in the way mentioned above, but I'm not satisfied with the code, since too much of it exists just to meet GObject requirements. I'm considering gstreamermm. Is it possible to create a new element type with C++-style

Difference between gst_bus_add_watch() and g_signal_connect()

放肆的年华 submitted on 2019-12-08 00:00:03
Question: I'm reading the GStreamer application development manual, which discusses the pipeline bus in the context of message and event handling. It mentions two functions, gst_bus_add_watch() and g_signal_connect(), and they appear to be interchangeable. The manual says, on page 27: "Note that if you're using the default GLib mainloop integration, you can, instead of attaching a watch, connect to the 'message' signal on the bus." What's the difference between these

Using Gstreamer with Google speech API (Streaming Transcribe) in C++

Deadly submitted on 2019-12-07 13:54:28
I am using the Google Speech API from Cloud Platform to get speech-to-text for streaming audio. I have already done REST API calls using curl POST requests for a short audio file on GCP. I have seen the documentation for Google Streaming Recognize, which says "Streaming speech recognition is available via gRPC only." I have gRPC (and protobuf) installed on my openSUSE Leap 15.0. Here is a screenshot of the directory. Next I am trying to run the streaming_transcribe example from this link, and I found that the sample program uses a local file as the input but simulates it as

Python GStreamer for Windows

牧云@^-^@ submitted on 2019-12-07 09:31:45
Question: I want to use the Python bindings for GStreamer on Windows, but looking at the INSTALL file, GStreamer builds the Unix way (make; make install). I don't want to install Cygwin or other Unix-like environments for Windows. Is there a GPL binary distribution of GStreamer available somewhere (or a script that can just install it using python setup.py install)? Thanks. UPDATE: I am using Python 2.6 (or higher). The current packages are only available for Python 2.4 or 2.5. Answer 1: I will answer my own

GStreamer: transcoding Matroska video to MP4

时间秒杀一切 submitted on 2019-12-07 06:25:54
Question: The hardware we are working on doesn't support playing MKV files, so I need to transcode Matroska (MKV) video files to MP4. As I understand from the material available online on transcoding, I need to do the following: separate out the different streams of the MKV file using the matroskademux element; decode the audio and video streams to raw format using the available decoders, feed that data to the MP4 muxer element, and re-encode to the required format. Could
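The demux/decode/re-encode/mux chain described above can be sketched as a single gst-launch pipeline. This is a sketch only, using 0.10-era elements; the encoder choices (x264enc, faac) and the decodebin shortcut for decoding are assumptions, and should be swapped for whatever codecs the target hardware actually supports:

```shell
# Demux the MKV, decode both streams, re-encode to H.264 + AAC, mux to MP4.
# "name=demux"/"name=mux" let the two branches connect to the same elements;
# queues decouple the branches into separate threads.
gst-launch filesrc location=input.mkv ! matroskademux name=demux \
  demux. ! queue ! decodebin ! ffmpegcolorspace ! x264enc ! queue ! mp4mux name=mux ! filesink location=output.mp4 \
  demux. ! queue ! decodebin ! audioconvert ! faac ! queue ! mux.
```

If the MKV's streams are already in MP4-compatible formats (e.g. H.264/AAC), remuxing without re-encoding is much cheaper: drop the decode/encode elements and link the demuxer's parsed output straight to the muxer.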

How can I speed up a video by dropping frames?

孤街醉人 submitted on 2019-12-07 04:53:08
Question: I've got a video that's 30 minutes long. I want to make a sped-up version that's (say) 15 minutes long. I could do this by dropping every 2nd frame. How can I do this on Linux? I'm playing with GStreamer and it looks cool. Is there a way to do this with GStreamer, and what would the gst-launch command line be? My source video is Motion JPEG, so I do have individual frames to drop. Even if it used keyframes, there should still be a way to 'double speed' the film. I'd like a command
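One possible sketch, assuming a reasonably recent GStreamer 1.x (the videorate element gained a "rate" property in 1.12): decoding, letting videorate drop frames and retime the stream at twice the speed, then re-encoding. The filenames and the x264enc/mp4mux choice are illustrative, and this handles video only - any audio track would need separate treatment (e.g. a pitch/tempo element) to stay in sync.

```shell
# rate=2.0 makes videorate output the stream at double speed, dropping
# frames as needed, so a 30-minute input yields a ~15-minute output.
gst-launch-1.0 filesrc location=input.avi ! decodebin ! videoconvert ! \
  videorate rate=2.0 ! x264enc ! mp4mux ! filesink location=fast.mp4
```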

Differences between GStreamer and FFmpeg demuxing

不羁岁月 submitted on 2019-12-07 04:27:23
For the same audio format, the audio frames produced by GStreamer's and FFmpeg's demuxers are organized differently; I ran into this problem recently.
1. For RealAudio, GStreamer outputs packets (each containing multiple frames), while FFmpeg outputs individual frames.
2. For Ogg, GStreamer outputs Vorbis packets, including the first three header packets; FFmpeg outputs only audio packets, and passes the header packets via extradata.
3. For FLAC, GStreamer first outputs the metadata block and then the data blocks; FFmpeg outputs only data blocks.
No other differences found so far.
Source: oschina. Link: https://my.oschina.net/u/347556/blog/92876

GStreamer official tutorials, Basic tutorial 5: GUI toolkit integration

此生再无相见时 submitted on 2019-12-07 04:27:02
Goal: This tutorial shows how to integrate GStreamer into a graphical user interface (GUI) toolkit such as GTK+. Basically, GStreamer handles media playback while the GUI handles user interaction. The most interesting parts are the interactions between the two libraries: instructing GStreamer to output video to a GTK+ window, and forwarding user actions to GStreamer. In particular, you will learn:
- How to tell GStreamer to output video to a particular window (instead of creating its own).
- How to continually refresh the GUI with information from GStreamer.
- How to update the GUI from GStreamer's multiple threads.
- A mechanism to subscribe only to the messages you are interested in, instead of being notified of all of them.
Introduction: We will build a media player using the GTK+ toolkit, but the concepts apply to other toolkits such as Qt. Basic GTK+ knowledge will help you understand this tutorial. The main point is telling GStreamer to output the video to a window of our choice. The specific mechanism depends on the operating system (or rather, the windowing system), but GStreamer provides an abstraction layer for platform independence. This independence comes through the XOverlay interface, which allows an application to tell the video sink the handle of the window where rendering should occur.
GObject interfaces: A GObject interface (which GStreamer uses) is a set of functions that an element can implement. If so