gstreamer

Python3 error with Gstreamer

Submitted by 拜拜、爱过 on 2019-12-12 04:07:42
Question: I run:

raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=serverIp port=5000

on the Raspberry Pi, and:

gst-launch-1.0 -v tcpclientsrc host=serverIp port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false

on my computer, and I received the video streamed from the Raspberry Pi. Now I want to write a Python script that does the same on my computer. My code is: #!
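For what it's worth, a minimal Python 3 sketch of the receiving side (assuming PyGObject and the GStreamer 1.0 bindings are installed; the pipeline string simply mirrors the gst-launch-1.0 client command above, and the GStreamer calls are untested here):

```python
# Same element chain as the gst-launch-1.0 client command above.
RECEIVER_PIPELINE = (
    "tcpclientsrc host=serverIp port=5000 "
    "! gdpdepay ! rtph264depay ! avdec_h264 "
    "! videoconvert ! autovideosink sync=false"
)

def run():
    # Imports are deferred so the pipeline string can be inspected
    # on machines without GStreamer installed.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.parse_launch(RECEIVER_PIPELINE)
    pipeline.set_state(Gst.State.PLAYING)
    try:
        GLib.MainLoop().run()   # block until interrupted (Ctrl-C)
    finally:
        pipeline.set_state(Gst.State.NULL)

# Call run() to start receiving.
```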

loading same gstreamer elements multiple times in a process

Submitted by 吃可爱长大的小学妹 on 2019-12-12 03:36:05
Question: This may be a silly question: how are GStreamer elements loaded multiple times within a single process? When GStreamer elements are created, are they shared if one is already present in memory? In my case, one process creates multiple threads, and in each thread I create the following GStreamer elements, link them, and set the pipeline to the PLAYING state: filesrc -> queue -> filesink. This works, but when I add a GStreamer element (newly written, to process GstBuffer data) between the queue and the filesink, all threads stop
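As background to the question: a plugin's shared library is loaded once per process and its element factory registered once, but each "make" call on a factory returns a fresh, independent element instance, so per-thread pipelines are a normal pattern. A string-only sketch of the per-thread setup (hypothetical file names, no real gi calls):

```python
# Illustrative sketch: each worker thread would build and run its own
# filesrc -> queue -> filesink pipeline from a description like this.
def pipeline_desc(n):
    # Per-thread variant of the question's element chain.
    return (f"filesrc location=in_{n}.dat ! queue ! "
            f"filesink location=out_{n}.dat")

# One description per thread; elements themselves are never shared.
descs = [pipeline_desc(i) for i in range(3)]
print(descs[0])
```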

Type Find Function Not Called in GStreamer Plugin

Submitted by 谁都会走 on 2019-12-12 03:34:26
Question: I have an element that decodes a media type, mytype for example. I want to register the type so that the decodebin element can use my element when needed. I added the code I thought would work, but my type_find() function is never called. Any ideas on what I'm doing wrong? Here's what the code looks like:

#define MY_CAPS (gst_static_caps_get(&my_caps))
static GstStaticCaps my_caps = GST_STATIC_CAPS("audio/x-mycaps");
static gchar *my_exts[] = { "mtype", NULL };
static void type_find

How to convert I420 frames to BGRA format with gst-launch-1.0?

Submitted by 随声附和 on 2019-12-12 03:33:20
Question: I had a raw video file named video.i420 in the I420 format, and I tried to convert it to the BGRA format using gst-launch-1.0:

gst-launch-1.0 filesrc location=video.i420 ! videoparse width=1920 height=816 format=2 framerate=24/1 ! videoconvert ! videoparse format=12 ! filesink location=video.bgra

But the output file video.bgra is only 48 bytes larger than the source file. I then played video.bgra with the following command: gst-launch-1.0 filesrc location=video.bgra ! videoparse width
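A quick size sanity check (my arithmetic, not from the question): I420 stores 12 bits per pixel while BGRA stores 32, so a real conversion of 1920x816 frames should grow the data by a factor of 8/3, not 48 bytes overall. That suggests the second videoparse only relabels the bytes; forcing the conversion with a caps filter such as video/x-raw,format=BGRA after videoconvert, instead of a second videoparse, is a common approach (an assumption on my part, untested here).

```python
# Per-frame sizes for 1920x816 video in each format.
width, height = 1920, 816

i420_frame = width * height * 3 // 2   # 12 bits per pixel (Y plus subsampled U/V)
bgra_frame = width * height * 4        # 32 bits per pixel

print(i420_frame)   # bytes per I420 frame
print(bgra_frame)   # bytes per BGRA frame
```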

GStreamer AAC audio stream delay in iOS

Submitted by 我怕爱的太早我们不能终老 on 2019-12-12 03:24:30
Question: I'm playing an AAC audio stream on my iOS device using the GStreamer SDK. It works fine, but the delay is above 2.0 seconds. Can I make the delay lower than 2.0 seconds? There may be a buffering issue. This is how I'm creating the pipeline: pipeline = gst_parse_launch("playbin2", &error);

Answer 1: Try setting the latency like this: g_object_set(G_OBJECT(pipeline.source), "latency", 250, NULL);

Source: https://stackoverflow.com/questions/32865653/gstreamer-aac-audio-stream-delay-in-ios

gstreamer sample documentation code not running

Submitted by 一曲冷凌霜 on 2019-12-12 03:05:14
Question: I'm trying to compile and run a sample appsrc program after having successfully built several tutorials. This is documentation code, so I expected it to run, but... The command used to compile: gcc appGuideAppSrc.c -o appGuide `pkg-config --cflags --libs gstreamer-0.10 gstreamer-app-0.10` I got the following output: appGuideAppSrc.c: In function 'cb_need_data': appGuideAppSrc.c:14:8: warning: assignment makes pointer from integer without a cast [enabled by default] appGuideAppSrc.c:18:25: error:

Segmentation fault with g_object_set / strchr

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-12 02:12:27
Question: This line gives me a segmentation fault: g_object_set(G_OBJECT(data.udpsrc), "port", 5000, "caps", caps, NULL); where data.udpsrc = gst_element_factory_make("udpsrc", "source"); and caps = gst_caps_new_empty_simple("application/x-rtp"); Here's the output with gdb:

Program received signal SIGSEGV, Segmentation fault.
strchr () at ../ports/sysdeps/arm/armv6/strchr.S:28
28 ../ports/sysdeps/arm/armv6/strchr.S: No such file or directory.
(gdb) bt
#0 strchr () at ../ports/sysdeps/arm/armv6/strchr.S:28

Combine multiple images with gstreamer

Submitted by 帅比萌擦擦* on 2019-12-12 00:53:47
Question: I want to do some image processing with GStreamer in C, where I read a couple of images and then concatenate them all into one big image (the images in my program are options that the user can choose later), and I don't want to use any external library to do that. Any suggestions would be great.

Answer 1: So basically you want to do compositing of the images, i.e. given images A, B, C, D, produce this image, for instance:

 ______________
|      |       |
|  A   |   B   |
|______|_______|
|      |       |
|  C   |   D   |
|______________|
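The 2x2 layout above maps onto per-pad xpos/ypos offsets, which is how GStreamer's compositor element positions its inputs (my suggestion, not part of the answer excerpt; the tile size below is an assumption):

```python
# Hypothetical tile size for each input image A, B, C, D.
tile_w, tile_h = 320, 240

# (xpos, ypos) for each pad in a 2x2 grid, row-major order: A, B, C, D.
offsets = [
    (col * tile_w, row * tile_h)
    for row in range(2)
    for col in range(2)
]
print(offsets)
```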

gstreamer pipeline with VADER element stalls on PAUSE when used with a tee

Submitted by 限于喜欢 on 2019-12-12 00:28:30
Question: I have this pipeline that uses pocketsphinx's VAD element:

Gst.parse_launch(
    "pulsesrc device=\"alsa_input.usb-046d_08c9_674634A4-02-U0x46d0x8c9.analog-mono\" " +
    "! vader name=vad auto-threshold=true " +
    "! level name=wavelevel interval=100000000 " + // level interval is in nanoseconds
    "! wavenc " +
    "! filesink location=audioz.wav"
);

It works fine except that the streaming stops when there is no voice coming from the source. I want the recording to continue independently of the VAD, so I
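One way to sketch a fix (my own rearrangement, untested): split the stream with a tee before vader, so one branch records unconditionally while the other feeds the VAD; each tee branch gets its own queue:

```python
# Hedged sketch: recording branch is independent of the vader branch.
DEVICE = "alsa_input.usb-046d_08c9_674634A4-02-U0x46d0x8c9.analog-mono"

TEE_PIPELINE = (
    f'pulsesrc device="{DEVICE}" ! tee name=t '
    # Branch 1: always-on recording.
    "t. ! queue ! level name=wavelevel interval=100000000 "
    "! wavenc ! filesink location=audioz.wav "
    # Branch 2: voice activity detection, sink choice is an assumption.
    "t. ! queue ! vader name=vad auto-threshold=true ! fakesink"
)
print(TEE_PIPELINE)
```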

gstreamer: write both video and audio streams into a single MP4 container after concat

Submitted by 与世无争的帅哥 on 2019-12-12 00:05:49
Question: Good day. I have two MP4 files (a.mp4 and b.mp4), each of which includes a video stream and an audio stream, and I need to concatenate them into a single MP4 container (c.mp4) using GStreamer (this question is connected to the previous one). In other words, the following pipeline concatenates the content of a.mp4 and b.mp4 and then outputs the result to autovideosink and alsasink: GST_DEBUG=3 gst-launch-1.0 concat name=c2 ! videoconvert ! videorate ! autovideosink concat name=c ! audioconvert !
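A hedged sketch of the output side (my element choices, untested): replace the two sinks with encoders feeding a single mp4mux. The demuxer branches that feed the two concat elements are the ones already in the question's pipeline and are omitted here:

```python
# Hypothetical sketch: route both concat outputs into one mp4mux instead of
# autovideosink/alsasink. Encoder choices (x264enc, voaacenc) are assumptions.
MUX_PIPELINE = (
    "mp4mux name=mux ! filesink location=c.mp4 "
    "concat name=c2 ! videoconvert ! x264enc ! mux. "
    "concat name=c ! audioconvert ! voaacenc ! mux."
)
print(MUX_PIPELINE)
```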