gstreamer

GStreamer - fakesink0:sink (4096 bytes, dts: none, pts: none, duration: none) for first text lines read by filesrc from .srt file

Posted by 拥有回忆 on 2019-12-24 18:54:28
Question: Why does the following pipeline report no timestamps (dts: none, pts: none) at the beginning of reading text from an .srt subtitle file? It's a problem for me, as I want to mux the text later with h264 video from another source, and that fails with "Buffer has no PTS" from the muxer. GStreamer version 1.14.5.

gst-launch-1.0 filesrc do-timestamp=true location=English.srt ! queue ! fakesink silent=false -v

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = event *******
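
A minimal sketch of one way to get timestamped subtitle buffers (an assumption about the asker's goal, not the accepted answer): let subparse derive PTS from the cue times inside the .srt file, instead of relying on filesrc do-timestamp, which does not produce useful timestamps for a non-live file source:

```
# Sketch: subparse reads the SRT cues and assigns each text buffer a PTS
# and duration taken from the subtitle timing.
gst-launch-1.0 -v filesrc location=English.srt ! subparse ! fakesink silent=false
```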

Migrating an Eclipse Android project with NDK and GStreamer usage to Android Studio

Posted by 假装没事ソ on 2019-12-24 15:00:21
Question: I have a perfectly working Android project in Eclipse; the project includes NDK support and uses GStreamer. When I migrate the project from Eclipse to Android Studio, all sorts of problems pop up, and I just can't compile the project successfully. I did thorough research on each and every error I encountered, but still couldn't compile and run the project in Android Studio. https://drive.google.com/file/d/0B_euzgSjTAqcQngwbzR1cXY0MkU/view?usp=sharing A link to the working Eclipse project, I

Save an RTSP stream into an AVI file with GStreamer

Posted by 孤街浪徒 on 2019-12-24 09:48:44
Question: I have a video server which gives me video and audio streams over RTSP. I can view it with the gst-launch tool using this command:

gst-launch-1.0 uridecodebin uri=rtsp://path/to/source ! autovideosink

Now I need to store that video stream in a file for later playback in any popular video player (VLC, Windows Media Player, and so on). I tried simply replacing autovideosink with filesink location=file.avi and adding the -e option, as recommended in that answer. The file is created, but I think it's no in
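
A hedged sketch of one way to record the stream into a finalizable file (element choices like x264enc and avimux are assumptions, not the answer the asker followed); the -e flag turns Ctrl-C into EOS so the muxer can finish writing the file:

```
# Decode the RTSP stream, re-encode to H.264, and mux into AVI.
# -e sends EOS on interrupt so avimux can finalize the file.
gst-launch-1.0 -e rtspsrc location=rtsp://path/to/source ! decodebin ! videoconvert ! x264enc ! h264parse ! avimux ! filesink location=file.avi
```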

GStreamer with multiple cameras: how can I split the pipeline based on the camera identifier?

Posted by ↘锁芯ラ on 2019-12-24 08:59:49
Question: I am trying to build a GStreamer pipeline which interleaves images from multiple cameras into a single data flow, which can be passed through a neural network and then split into separate branches for sinking. I am successfully using the appsrc plugin and the Basler Pylon 5 - USB 3.0 API to create the interleaved feed. However, before I do the work of writing the neural network GStreamer element, I want to get the splitting working. Currently, I am thinking of tagging the images with an
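
For reference, a minimal fan-out sketch (an illustration, not metadata-based routing): tee duplicates the interleaved feed into parallel branches, whereas true per-camera routing would need a custom element or a pad probe that inspects each buffer's tag:

```
# tee copies every buffer to both branches; the queues decouple the
# branches so one slow sink cannot stall the other.
gst-launch-1.0 videotestsrc ! tee name=t \
    t. ! queue ! autovideosink \
    t. ! queue ! autovideosink
```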

gst-launch-1.0 videotestsrc ! autovideosink doesn't work (va errors)

Posted by 那年仲夏 on 2019-12-24 07:16:18
Question: I have Ubuntu 16.04. I tried to install GStreamer following this tutorial: https://gstreamer.freedesktop.org/documentation/installing/on-linux.html but it didn't work for me (the packages cannot be found). So I tried this instead:

list=$(apt-cache --names-only search ^gstreamer1.0-* | awk '{ print $1 }' | grep -v gstreamer1.0-hybris)
sudo apt-get install $list

After installing GStreamer, I tested it with:

gst-launch-1.0 videotestsrc ! autovideosink

and got this log: Setting pipeline to PAUSED .
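
A hedged diagnostic sketch (assuming the VA errors come from the sink that autovideosink selects): bypass autoplugging with an explicit X11 sink to check whether the test source itself is fine:

```
# ximagesink avoids any VA-API/OpenGL sink that autovideosink might pick;
# videoconvert supplies a pixel format that ximagesink accepts.
gst-launch-1.0 videotestsrc ! videoconvert ! ximagesink
```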

GStreamer-Java: RTSP-Source to UDP-Sink

Posted by 眉间皱痕 on 2019-12-24 07:02:02
Question: I'm currently working on a project to forward (and later transcode) an RTP stream from an IP webcam to a SIP user in a video call. I came up with the following gstreamer pipeline:

gst-launch -v rtspsrc location="rtsp://user:pw@ip:554/axis-media/media.amp?videocodec=h264" ! rtph264depay ! rtph264pay ! udpsink sync=false host=xxx.xxx.xx.xx port=xxxx

It works fine. Now I want to create this pipeline in Java. This is my code for creating the pipe: Pipeline pipe = new Pipeline("IPCamStream");
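
As a side note, the question's pipeline uses the legacy 0.10-era gst-launch; a sketch of the same forwarding pipeline in 1.x syntax (hosts and ports left as the question's placeholders):

```
# Depayload the H.264 from RTSP, then repayload it as RTP over UDP.
gst-launch-1.0 -v rtspsrc location="rtsp://user:pw@ip:554/axis-media/media.amp?videocodec=h264" ! rtph264depay ! rtph264pay ! udpsink sync=false host=xxx.xxx.xx.xx port=xxxx
```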

GStreamer: status of Python bindings and encoding video with mixed audio

Posted by 别说谁变了你拦得住时间么 on 2019-12-24 05:23:07
Question: I am hoping to find a way to write generated (non-real-time) video from Python and mix it with an external audio file (MP3) at the same time. What's the current status of the GStreamer Python bindings; are they up to date? Would it be possible to write MPEG-4 output with GStreamer and feed raw image frames from Python? Is it possible to construct the pipeline so that GStreamer would also read the MP3 audio and mix it into the container, so that I do not need to reprocess the resulting video track with ffmpeg
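
A minimal sketch of the muxing half of the question (videotestsrc stands in for frames that a real program would push through appsrc; mp4mux's MP3 support is assumed here):

```
# Mux one encoded video branch and one parsed MP3 branch into a single MP4.
# -e sends EOS on interrupt so mp4mux can write its index.
gst-launch-1.0 -e mp4mux name=mux ! filesink location=out.mp4 \
    videotestsrc num-buffers=300 ! x264enc ! h264parse ! mux. \
    filesrc location=music.mp3 ! mpegaudioparse ! mux.
```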

GStreamer video output position tracking and seeking

Posted by 时间秒杀一切 on 2019-12-24 01:43:20
Question: I am using gstreamer (gst-launch) to capture a camera and save the stream as both video and image frames. The problem is that when the recording is stopped (by interrupt), the resulting video does not support position tracking and seeking; hence the video plays in the VLC player with unknown length. I think the problem is in the pipeline itself. How can position tracking and seeking be supported? Below is the gstreamer pipeline: gst-launch -v --gst-debug-level=0 \
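
A hedged sketch of the usual fix (the source and muxer here are assumptions, since the question's pipeline is cut off): an interrupted muxer never writes its index, so the file has no known duration; passing -e makes gst-launch convert Ctrl-C into EOS, letting the muxer finalize:

```
# -e turns the interrupt into EOS; mp4mux then writes the moov index,
# which is what players use for duration and seeking.
gst-launch-1.0 -e v4l2src ! videoconvert ! x264enc ! mp4mux ! filesink location=out.mp4
```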

GStreamer: Add dummy audio track to the received rtp stream

Posted by ﹥>﹥吖頭↗ on 2019-12-24 00:51:58
Question: I'm initiating an RTP stream from my Raspberry Pi camera using:

raspivid -n -vf -fl -t 0 -w 640 -h 480 -b 1200000 -fps 20 -pf baseline -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay pt=96 config-interval=10 ! udpsink host=192.168.2.3 port=5000

On the client side, I'm converting it to HLS and uploading it to a web server:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,payload=96 ! rtph264depay ! mpegtsmux ! hlssink max-files=5 target-duration=5 location=C:/xampp/htdocs/live/segment%%05d
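
A sketch of one way to add the dummy track (the AAC encoder voaacenc is an assumption about what is installed): feed a silent audiotestsrc into the same mpegtsmux alongside the received video:

```
# Silent live audio is encoded to AAC and muxed with the depayloaded H.264;
# naming the muxer lets both branches link to it.
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,payload=96 ! rtph264depay ! h264parse ! mux. \
    audiotestsrc wave=silence is-live=true ! voaacenc ! aacparse ! mux. \
    mpegtsmux name=mux ! hlssink max-files=5 target-duration=5
```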

How to view GStreamer log in Android?

Posted by 一世执手 on 2019-12-23 22:52:43
Question: How do I view the output of log functions like GST_CAT_INFO, GST_DEBUG, etc. in an Android environment? Can I view them in logcat?

Answer 1: The log is written to stderr. You can redirect it to a file (2>debug.log) and download it to your computer, where you can read it using 'less' or 'more'. Alternatively, disable the ANSI colors (GST_DEBUG_NO_COLOR=1) and use gst-debug-viewer to browse it interactively.

Answer 2: There is a method to redirect stdio to the log so it is visible in logcat: Redirect stdout
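
To illustrate Answer 1's suggestion (a desktop-shell sketch; on a device the same idea would run under adb shell): enable a debug level, strip the colors, and capture stderr:

```
# GST_DEBUG selects verbosity, GST_DEBUG_NO_COLOR removes ANSI codes,
# and 2> captures the log stream that GStreamer writes to stderr.
GST_DEBUG=3 GST_DEBUG_NO_COLOR=1 gst-launch-1.0 videotestsrc num-buffers=30 ! fakesink 2>debug.log
```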