gstreamer

Gstreamer: how to link decodebin to encodebin? (error: failed delayed linking some pad of …)

我的梦境 · Submitted 2020-07-23 05:20:19

Question: Naively, I am trying to link decodebin to encodebin:

$ gst-launch-1.0 filesrc location="/tmp/sound.wav" ! decodebin ! encodebin profile="application/ogg:video/x-theora:audio/x-vorbis" ! filesink location="/tmp/sound.ogg"

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
WARNING: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0: Delayed linking failed.
Additional debug info:
./grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0: failed
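One hedged sketch of a fix: since the source here is an audio-only WAV file, the video branch of the encoding profile can never be fed, which leaves decodebin with a pad that never links. Dropping the video part of the profile and inserting an explicit conversion step before encodebin is a common remedy (paths and the profile string follow the question; adjust to your setup):

```shell
# Sketch: audio-only source, so request only an audio stream in the profile,
# and let audioconvert/audioresample adapt the raw audio for vorbisenc.
gst-launch-1.0 filesrc location="/tmp/sound.wav" \
  ! decodebin \
  ! audioconvert ! audioresample \
  ! encodebin profile="application/ogg:audio/x-vorbis" \
  ! filesink location="/tmp/sound.ogg"
```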

Cannot load python gstreamer elements

寵の児 · Submitted 2020-06-25 22:06:31

Question: I'm following the guide at https://mathieuduponchelle.github.io/2018-02-01-Python-Elements.html?gi-language=undefined to create a sample GStreamer element in Python. However, I can't get GStreamer to load it. I've been fiddling with GST_PLUGIN_PATH, but my Python files are never found. GStreamer finds compiled .so elements, but the Python elements seem to evade the plugin loader. I've installed gstreamer1.0, pygobject, and gst-python to the best of my ability onto Debian
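A frequent cause of this symptom is directory layout: the gst-python plugin loader scans a `python` subdirectory of each entry in GST_PLUGIN_PATH, not the path itself. A hedged sketch (the directory and element names are hypothetical placeholders):

```shell
# The Python plugin loader looks for .py files in a "python" subdirectory
# of each GST_PLUGIN_PATH entry, unlike compiled .so plugins.
mkdir -p ~/my-gst-plugins/python
cp myelement.py ~/my-gst-plugins/python/
export GST_PLUGIN_PATH=~/my-gst-plugins

# Verify the loader picks it up (requires gst-python's loader to be installed):
gst-inspect-1.0 myelement
```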

How to use GStreamer to stream from IP RTMP Camera to v4l2loopback Camera?

别说谁变了你拦得住时间么 · Submitted 2020-06-17 13:04:58

Question: I am trying to use GStreamer to connect an RTMP/RTSP stream to a v4l2loopback virtual device.

Works 1: RTMP to AutoVideoSink
sudo gst-launch-1.0 rtspsrc location=rtsp://192.168.xxx.xxx/live/av0 ! decodebin ! autovideosink
sudo gst-launch-1.0 rtmpsrc location=rtmp://192.168.xxx.xxx/live/av0 ! decodebin ! autovideosink

Works 2: TestSrc to Dummy Video5
sudo gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video5

Does not work: RTMP to Dummy Video5 (no error, but the video does not show)
sudo
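A hedged sketch of the usual fix: v4l2sink needs raw video in a format the loopback device accepts, so placing videoconvert (optionally with a caps filter) between decodebin and v4l2sink typically makes the stream appear. The URL and device path follow the question; the YUY2 format is an assumption that may need adjusting for your v4l2loopback configuration:

```shell
# Sketch: decode the network stream, convert to a plain raw format,
# then hand it to the loopback device.
sudo gst-launch-1.0 rtspsrc location=rtsp://192.168.xxx.xxx/live/av0 \
  ! decodebin ! videoconvert \
  ! video/x-raw,format=YUY2 \
  ! v4l2sink device=/dev/video5
```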

interleaving 4 channels of audio into vorbisenc or opusenc in gstreamer

安稳与你 · Submitted 2020-05-17 04:14:27

Question: I'm trying to interleave 4 channels of audio into one audio file. I have managed to save them into WAV with wavenc:

gst-launch-1.0 interleave name=i \
  filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_0 \
  filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_1 \
  filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE
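One hedged sketch of how the same interleave front-end might feed an encoder instead of wavenc: interleave's output often lacks a channel layout that vorbisenc will negotiate, so declaring the channel count with an unpositioned channel-mask after an audioconvert is a commonly suggested workaround. This is a sketch under that assumption, not a verified recipe; the output file name is a placeholder, and only two input branches are shown (the remaining branches follow the same pattern for i.sink_2 and i.sink_3):

```shell
# Sketch: declare 4 unpositioned channels on interleave's output so the
# encoder can negotiate, then encode to Ogg/Vorbis.
gst-launch-1.0 interleave name=i \
  ! audioconvert \
  ! "audio/x-raw,channels=4,channel-mask=(bitmask)0x0" \
  ! vorbisenc ! oggmux ! filesink location=four_channels.ogg \
  filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! queue ! i.sink_0 \
  filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! queue ! i.sink_1
```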

GStreamer. Probe after rtph265pay never called

淺唱寂寞╮ · Submitted 2020-05-17 03:25:09

Question: I have an RTSP server and I want to extend the RTP buffer header. For this purpose I added a probe to the src pad of rtph265pay, but it is never called. My pipeline:

( appsrc name=vsrc ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! omxh265enc MeasureEncoderLatency=true bitrate=20000000 control-rate=2 ! rtph265pay name=pay0 pt=96 )

Code where I attach the probe:

static GstPadProbeReturn test_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  cout << "i'm here";
  return GST_PAD_PROBE_OK;  /* a probe callback must return a GstPadProbeReturn */
}

void mediaConfigure

Where are Gstreamer bus log messages?

落花浮王杯 · Submitted 2020-05-16 05:53:29

Question: I am trying to stream a .mp4 to an RTSP server using GStreamer in Python:

import sys
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
gi.require_version('GstRtsp', '1.0')
from gi.repository import Gst, GstRtspServer, GObject, GLib, GstRtsp

loop = GLib.MainLoop()
Gst.init(None)
file_path = "test.mp4"

class TestRtspMediaFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self):
        GstRtspServer.RTSPMediaFactory.__init__(self)
    def do_create_element(self, url):
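For seeing GStreamer's own log output without writing any bus-watch code, the GST_DEBUG environment variable raises the library's log level for the whole process. A hedged sketch (the script name is a hypothetical placeholder):

```shell
# Level 3 shows errors, warnings and fixme messages on stderr.
GST_DEBUG=3 python3 rtsp_server.py

# Or send a more verbose log (level 4 adds info messages) to a file:
GST_DEBUG=4 GST_DEBUG_FILE=/tmp/gst.log python3 rtsp_server.py
```

Note that bus messages proper (errors, EOS, state changes) only surface in Python if the program attaches a watch to the pipeline's bus; the environment variable is independent of that and works on any unmodified script.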

A detailed look at the video processing technology of Baidu Brain EdgeBoard

青春壹個敷衍的年華 · Submitted 2020-05-07 20:01:25

Background: Video processing is an important direction in AI applications. For an AI accelerator product deployed at the edge, video input capability is a key indicator of its technical strength and directly shapes the user experience. Embedded edge devices are constrained by their CPU and other hardware resources, so supporting many kinds of video devices and video formats at the same time is a major challenge.

EdgeBoard is Baidu's FPGA-based embedded AI solution. It provides strong compute, supports customized models, adapts to a variety of scenarios, and substantially improves a device's AI inference capability, with high performance, broad applicability, and easy integration as its main traits. As a hardware platform aimed at AI developers of all levels, EdgeBoard accepts a wide range of video inputs, including MIPI, BT1120, USB cameras, IPC (IP cameras), and GigE industrial cameras. This breadth demonstrates EdgeBoard's video processing capability; supporting this many input devices at once is uncommon among edge AI products.

This article describes EdgeBoard's video processing scheme in detail: how it balances efficiency against generality and satisfies user needs as far as possible.

Linux V4L2 architecture

The kernel V4L2 module

Over its development, Linux has taken a large share of the embedded operating system market thanks to its cross-platform portability. EdgeBoard uses a Linux kernel built with the Xilinx PetaLinux tools, with a RootFS-based system image. If the user needs
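Where the kernel's V4L2 layer is involved, the standard v4l2-ctl utility (from the v4l-utils package) can show what a given system actually exposes; a hedged sketch, assuming at least one capture device is present at /dev/video0:

```shell
# List the video devices the kernel's V4L2 layer currently exposes
v4l2-ctl --list-devices

# Show the pixel formats and frame sizes a particular device supports
v4l2-ctl -d /dev/video0 --list-formats-ext
```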