GStreamer

Gstreamer: Link a bin with two sinks to playbin2

末鹿安然 submitted on 2019-12-11 13:29:25
Question: I want to read in an SDP file, encode the video stream to H.264 and the audio stream to AAC, then multiplex those streams into an AVI container and write the result to a file. I don't know the contents of the SDP file ahead of time, so it seems easiest to use playbin2. So I thought I could do something like this:

          RawToAviMux Bin
          ______________________________________
     -----|ghostpad----x264enc
    /     |                   \
playbin2--|                    avimux--filesink
    \     |                   /
     -----|ghostpad----ffenc_aac
          |___________________
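For illustration, a minimal sketch of the bin-plus-ghost-pads construction itself, written against GStreamer 1.0 / PyGObject (the question targets 0.10's playbin2; the AAC encoder name and whether avimux accepts AAC on a given install are assumptions):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# build the RawToAviMux bin: two encoders feeding one muxer and a filesink
rawtoavi = Gst.Bin.new('raw-to-avimux')
venc = Gst.ElementFactory.make('x264enc', None)
aenc = Gst.ElementFactory.make('avenc_aac', None)   # assumed 1.0 stand-in for ffenc_aac
mux = Gst.ElementFactory.make('avimux', None)
sink = Gst.ElementFactory.make('filesink', None)
sink.set_property('location', 'out.avi')

for e in (venc, aenc, mux, sink):
    rawtoavi.add(e)
venc.link(mux)   # requests a video_%u pad on avimux
aenc.link(mux)   # requests an audio_%u pad (if avimux accepts AAC)
mux.link(sink)

# expose one ghost pad per encoder so the bin itself is linkable
rawtoavi.add_pad(Gst.GhostPad.new('videosink', venc.get_static_pad('sink')))
rawtoavi.add_pad(Gst.GhostPad.new('audiosink', aenc.get_static_pad('sink')))

Note, though, that playbin2's video-sink and audio-sink properties each expect their own separate sink, so a single bin with two ghost pads cannot simply be handed to playbin2 as drawn; that mismatch is effectively what the question is asking about.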

Error in pipeline porting pygst program from gstreamer 0.10 to 1.0

余生颓废 submitted on 2019-12-11 13:18:47
Question: I'm porting a program from pygst 0.10 to 1.0 and I have problems with the pipeline. The pipeline I use in the 0.10 version, which works well, is:

udpsrc name=src ! tsparse ! tsdemux ! queue ! ffdec_h264 max-threads=0 ! identity ! xvimagesink force-aspect-ratio=True name=video

For the 1.0 version the pipeline should be something like:

udpsrc name=src ! tsparse ! tsdemux ! queue ! avdec_h264 ! videoconvert ! xvimagesink force-aspect-ratio=True name=video

The code is:

self.pipeline = Gst.Pipeline()
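For reference, a minimal sketch of running the question's 1.0 pipeline string through Gst.parse_launch (the UDP port is an assumption; the question configures "src" elsewhere in its code):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    'udpsrc name=src ! tsparse ! tsdemux ! queue ! avdec_h264 ! '
    'videoconvert ! xvimagesink force-aspect-ratio=True name=video')
pipeline.get_by_name('src').set_property('port', 5004)  # assumed port
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()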

How can I quantitatively measure gstreamer H264 latency between source and display?

a 夏天 submitted on 2019-12-11 12:59:49
Question: I have a project where we are using gstreamer, x264, etc. to multicast a video stream over a local network to multiple receivers (dedicated computers attached to monitors). We're using gstreamer on both the video source (camera) systems and the display monitors. We're using RTP, payload 96, and libx264 to encode the video stream (no audio). But now I need to quantify the latency between (as close as possible to) frame acquisition and display. Does anyone have suggestions that use the
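One way to get hard numbers (an assumption, not from the original thread; requires GStreamer >= 1.8) is the built-in "latency" tracer, which logs how long each buffer takes from source to sink inside a pipeline. It does not cover the network hop by itself, so sender and receiver logs have to be combined, e.g. with synchronized clocks:

import os
# the tracer must be configured before GStreamer is initialised
os.environ['GST_TRACERS'] = 'latency'
os.environ['GST_DEBUG'] = 'GST_TRACER:7'
os.environ['GST_DEBUG_FILE'] = 'latency.log'

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# ...build and run the capture or display pipeline as usual; per-buffer
# latency records then accumulate in latency.log for offline analysis.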

Qt 5.6 + multimedia + gstreamer

橙三吉。 submitted on 2019-12-11 12:47:28
Question: Okay, here's the deal. I am on Ubuntu 14.04 LTS and have installed Qt 5.6, qtmultimedia5-dev, gstreamer0.10 (and 1.0), and libqtgstreamer-dev. I am STILL getting this error when I try to use a QAudioDecoder:

defaultServiceProvider::requestService(): no service found for - "org.qt-project.qt.audiodecode"

What am I missing?

Answer 1: Might be a bit late. I was having the same problem; installing the gstreamer plugins solved the issue.

Source: https://stackoverflow.com/questions/36465073/qt-5-6-multimedia

Gstreamer with Python and PyQt does not work very well

六月ゝ 毕业季﹏ submitted on 2019-12-11 12:16:45
Question: I have two Raspberry Pis and want to stream video from one to the other. To do this, I used the following command on the first Raspberry Pi to stream the video:

raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | \
gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 \
! gdppay ! tcpserversink host=serverIp port=5000

On the second Raspberry Pi I used the following PyQt code to capture the streamed video:

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository
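For completeness, a sketch of the matching receiver pipeline in Python (assumptions: the sender's gdppay calls for a gdpdepay here, and serverIp stands for the first Pi's address):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    'tcpclientsrc host=serverIp port=5000 ! gdpdepay ! rtph264depay ! '
    'avdec_h264 ! videoconvert ! autovideosink sync=false')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()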

Gstreamer: Pipeline is not added and remain nil

六月ゝ 毕业季﹏ submitted on 2019-12-11 12:08:29
Question: I am working on an iOS app which plays multiple RTSP streams at a time. I have integrated the latest GStreamer binaries and followed the demo tutorials from Here. I added an RTSP URL to test my stream, but it is not working: the GStreamerBackend is not creating the pipeline, pipeline is nil, and the delegate methods are not called. When the application runs, it shows the message "Creating pipeline" in the logs, but nothing else happens.

Source: https://stackoverflow.com/questions/27775181/gstreamer-pipeline
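A first debugging step (a generic suggestion, not from the original question): when pipeline creation fails, gst_parse_launch reports why through its GError argument, which the iOS tutorial code typically receives but an app can easily ignore. The equivalent check in Python, with a placeholder RTSP URL:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
try:
    # the RTSP URL below is a placeholder
    pipeline = Gst.parse_launch('rtspsrc location=rtsp://example/stream ! fakesink')
except GLib.Error as err:
    print('pipeline creation failed:', err.message)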

gstreamer rtp streaming webcam

試著忘記壹切 submitted on 2019-12-11 11:14:42
Question: I'm trying to stream my webcam using OpenCV and gstreamer. First I test from the command line with this:

gst-launch v4l2src ! ffmpegcolorspace ! theoraenc ! rtptheorapay ! udpsink host=localhost port=5000 sync=false -v

Then I try to view the stream using this command line:

gst-launch udpsrc port=5000 caps="video/x-raw-yuv, format=(fourcc)I420, framerate=(fraction)30/1, width=(int)640, height=(int)480, interlaced=(boolean)false" ! rtptheoradepay ! theoradec ! ximagesink

But I get
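One likely culprit (an observation, not from the original thread): the receiver's udpsrc is given raw-video caps, but rtptheoradepay needs application/x-rtp caps, and for Theora those must carry the configuration header that the sender prints when run with -v. A sketch of setting RTP caps programmatically, with the caps string shortened to illustrative fields only:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
src = Gst.ElementFactory.make('udpsrc', 'src')
src.set_property('port', 5000)
# copy the full negotiated caps (including the 'configuration' field)
# from the sender's -v output; the fields below alone are not sufficient
src.set_property('caps', Gst.Caps.from_string(
    'application/x-rtp, media=video, encoding-name=THEORA, payload=96'))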

how can I grab video from usb video capture + dvb device with gstreamer?

坚强是说给别人听的谎言 submitted on 2019-12-11 10:55:33
Question: I own an AVerMedia Volar HX USB stick and want to capture from the composite input, but I can't because I'm unable to select the input. I'm using gstreamer with Python. I think I need to use GstTuner to select the input, but I have no experience using gstreamer's interfaces. Could someone post a simple example? Thanks!

Answer 1:

src = gst.element_factory_make("v4l2src", "src")
src.set_state(gst.STATE_PAUSED)
try:
    # channel names will be different for each device
    channels = src.list_channels()
    composite =
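A hedged guess at how the rest of that pygst 0.10 answer usually looks (the label matching and the set_channel call follow the GstTuner interface, which pygst exposes directly on the element; the label text is device-specific):

import pygst
pygst.require('0.10')
import gst

src = gst.element_factory_make("v4l2src", "src")
src.set_state(gst.STATE_PAUSED)
try:
    channels = src.list_channels()
    # pick whichever channel's label mentions the composite input
    composite = next(c for c in channels if 'composite' in c.label.lower())
    src.set_channel(composite)
except AttributeError:
    pass  # the element does not implement the tuner interface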

How to solve failing gstreamer assertions in a simple TcpServerSrc to TcpServerSink pipeline

别来无恙 submitted on 2019-12-11 10:39:39
Question: I currently have a simple pipeline consisting of a tcpserversrc that relays its input to a tcpserversink, but this pipeline repeats the following 4 error messages on every g_main_loop iteration:

(dmp-server:9726): GStreamer-CRITICAL **: gst_mini_object_ref: assertion 'mini_object != NULL' failed
(dmp-server:9726): GStreamer-CRITICAL **: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(dmp-server:9726): GStreamer-CRITICAL **: gst_structure_has_field: assertion 'structure != NULL'
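For a self-contained reproduction, the described relay pipeline can be written as follows (host, ports, and the 1.0 API are assumptions; the question does not show its code):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    'tcpserversrc host=0.0.0.0 port=5000 ! '
    'tcpserversink host=0.0.0.0 port=5001')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()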

Force gstreamer appsink buffers to only hold 10ms of data

泄露秘密 submitted on 2019-12-11 10:17:10
Question: I have a gstreamer pipeline which drops all of its data into an appsink:

command = g_strdup_printf (
    "autoaudiosrc ! audio/x-raw-int, signed=true, endianness=1234, depth=%d, width=%d, channels=%d, rate=%d !"
    " appsink name=soundSink max_buffers=2 drop=true ",
    bitDepthIn, bitDepthIn, channelsIn, sampleRateIn);

which usually expands to something like:

autoaudiosrc ! audio/x-raw-int, signed=true, endianness=1234, depth=16, width=16, channels=1, rate=16000 ! appsink name=soundSink max_buffers=2 drop
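One common knob for this (an assumption, not from the original thread) is the latency-time property that audio sources derived from GstAudioBaseSrc (alsasrc, pulsesrc, ...) expose, which bounds how much audio each buffer carries; the property also existed on 0.10's GstBaseAudioSrc. A 1.0-style sketch, with the question's 0.10 caps rewritten in 1.0 form:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# latency-time is in microseconds, so 10 ms = 10000
pipeline = Gst.parse_launch(
    'alsasrc latency-time=10000 buffer-time=20000 ! '
    'audio/x-raw,format=S16LE,channels=1,rate=16000 ! '
    'appsink name=soundSink max-buffers=2 drop=true')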