gstreamer

How to rip the audio from a video?

喜欢而已 submitted on 2019-12-01 18:22:50
I am on Ubuntu and want to convert an MP4 video to an MP3 audio file but can't figure out how. I tried installing ffmpeg, but it failed to encode the MP3. I've read that GStreamer can do it, but I can't figure out how. I have GStreamer and Python installed. I can program in Python, but I am not super comfortable compiling software from source or with higher-level command-line work; I only know the basics on the command line.

    mplayer <videofile> -dumpaudio -dumpfile out.bin

This will copy the raw audio stream, which can then easily be converted using sox, lame, VLC or whatnot. VLC has nice conversion
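Since the asker already has Python and GStreamer, here is a minimal sketch of the MP4-to-MP3 conversion using PyGObject and a parsed pipeline. It assumes GStreamer 1.x with the lamemp3enc element installed (it ships in the standard good/ugly plugin sets, depending on version); input.mp4 and output.mp3 are placeholder file names:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    # Decode the MP4, keep the audio branch, and re-encode it as MP3.
    pipeline = Gst.parse_launch(
        'filesrc location=input.mp4 ! decodebin ! audioconvert ! audioresample '
        '! lamemp3enc ! filesink location=output.mp3'
    )
    pipeline.set_state(Gst.State.PLAYING)

    # Wait until the file has been fully written (EOS) or an error occurs.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)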

gstreamer critical error when trying to capture video using webcam python opencv

﹥>﹥吖頭↗ submitted on 2019-12-01 15:47:14
I'm trying to capture video from a webcam using OpenCV and Python with this simple code:

    import numpy as np
    import cv2

    cap = cv2.VideoCapture(0)
    print('cap.isOpened')
    if cap.isOpened():
        print('cap is opened')
    while True:
        re, img = cap.read()
        cv2.imshow("video output", img)
        k = cv2.waitKey(10) & 0xFF
        if k == 27:
            break
    cap.release()
    cv2.destroyAllWindows()

It works fine if I play an existing video such as an .mp4 file, but when I try to use the webcam I get an error:

    GStreamer-CRITICAL **: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
    cap.isOpened

For more information, I'm using
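A hedged sketch of a common workaround: open the camera through the V4L2 backend explicitly instead of letting OpenCV fall into its GStreamer capture path, and guard against empty frames. This assumes OpenCV 3.4+ (where the apiPreference argument of cv2.VideoCapture is available) and a camera at index 0:

    import cv2

    # Open the webcam with the V4L2 backend, bypassing the GStreamer capture
    # code that raises the GStreamer-CRITICAL assertion on this setup.
    cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
    if not cap.isOpened():
        raise RuntimeError('could not open camera 0')

    while True:
        ok, frame = cap.read()
        if not ok:                          # don't pass an empty frame to imshow()
            break
        cv2.imshow('video output', frame)
        if cv2.waitKey(10) & 0xFF == 27:    # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()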

Get the window handle in PyGI

眉间皱痕 submitted on 2019-12-01 13:05:12
In my program I use PyGObject/PyGI and GStreamer to show a video in my GUI. The video is shown in a Gtk.DrawingArea, so I need to get its window handle in the realize signal handler. On Linux I get that handle using:

    drawing_area.get_property('window').get_xid()

But how do I get the handle on Windows? I searched the internet but found only examples for PyGtk using window.handle, which does not work with PyGI. The GStreamer documentation provides an example that uses the GDK_WINDOW_HWND macro to get the handle; AFAIK that macro uses gdk_win32_drawable_get_handle. But how to do
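A commonly suggested workaround is to unwrap the native window pointer with ctypes and call GDK's Win32 helper directly, since PyGI does not expose gdk_win32_window_get_handle. The sketch below assumes GTK 3 on Windows; the DLL name ('libgdk-3-0.dll') and the __gpointer__ capsule detail are assumptions that may need adjusting for a particular GTK installation:

    import sys
    import ctypes

    def get_native_window_handle(widget):
        window = widget.get_property('window')
        if sys.platform != 'win32':
            return window.get_xid()          # X11 path, as in the question

        # Unwrap the underlying GdkWindow* from the PyGI wrapper ...
        ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
        ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
        gdk_window_ptr = ctypes.pythonapi.PyCapsule_GetPointer(window.__gpointer__, None)

        # ... and ask the GDK Win32 backend for the HWND.
        gdk = ctypes.CDLL('libgdk-3-0.dll')
        gdk.gdk_win32_window_get_handle.restype = ctypes.c_void_p
        gdk.gdk_win32_window_get_handle.argtypes = [ctypes.c_void_p]
        return gdk.gdk_win32_window_get_handle(gdk_window_ptr)

The returned value is what you would pass to the video sink's set_window_handle() from the realize handler.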

Split MPEG-TS into MP4 files with gstreamer 1.12.2

ぃ、小莉子 submitted on 2019-12-01 11:53:50
Question: I have an MPEG-TS file which contains two video/audio stream pairs:

    $ gst-discoverer-1.0 Recorder_Aug01_12-30-39.ts
    Analyzing Recorder_Aug01_12-30-39.ts
    Done discovering Recorder_Aug01_12-30-39.ts

    Topology:
      container: MPEG-2 Transport Stream
        audio: MPEG-2 AAC
        audio: MPEG-4 AAC
        video: H.264 (High Profile)
        audio: MPEG-2 AAC
        audio: MPEG-4 AAC
        video: H.264 (High Profile)

    Properties:
      Duration: 0:01:49.662738259
      Seekable: yes
      Tags:
        audio codec: MPEG-2 AAC
        video codec: H.264

Now I would like to
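A hedged sketch of one way to split out a program with GStreamer 1.x from Python: demux the TS with tsdemux, parse the elementary streams, and remux them with mp4mux. The program-number value and the output file name are placeholders, and tsdemux/h264parse/aacparse/mp4mux are assumed to be installed (they ship in the standard plugin sets); run it once per program to get one MP4 per video/audio pair:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    # Select one program from the transport stream and remux its H.264 video
    # and AAC audio into an MP4 container.
    pipeline = Gst.parse_launch(
        'filesrc location=Recorder_Aug01_12-30-39.ts '
        '! tsdemux name=demux program-number=1 '
        'demux. ! queue ! h264parse ! mp4mux name=mux ! filesink location=program1.mp4 '
        'demux. ! queue ! aacparse ! mux.'
    )
    pipeline.set_state(Gst.State.PLAYING)

    # Wait for EOS so the MP4 headers are finalized before exiting.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)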

OpenCV 3.0.0 error with Gstreamer

不羁岁月 submitted on 2019-12-01 11:04:44
I just installed OpenCV 3.0 following this tutorial: http://rodrigoberriel.com/2014/10/installing-opencv-3-0-0-on-ubuntu-14-04/ and I didn't encounter any errors during the installation. However, when I try to run a sample program such as the following:

    cd cpp/
    ./cpp-example-facedetect lena.jpg     // (../data/lena.jpg) OpenCV 3.0 beta
    ./cpp-example-houghlines pic1.png     // (../data/pic1.jpg) OpenCV 3.0 beta

I get the following error:

    Processing 1 lena.jpg
    GStreamer: Error opening bin: Unrecoverable syntax error while parsing pipeline lena.jpg
    Capture from AVI didn't work
    init done
    opengl
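The "Unrecoverable syntax error while parsing pipeline" message, together with "Capture from AVI didn't work", suggests OpenCV did not find the image at the given path and fell through to its video-capture backends, handing the bare file name to GStreamer as a pipeline string. A small sanity-check sketch (assuming the Python bindings of the same build are installed; the ../data/lena.jpg path comes from the sample comments above):

    import os
    import cv2

    # The samples resolve lena.jpg relative to the current directory, so first
    # check that the path actually exists and loads as an image.
    path = '../data/lena.jpg'
    print('exists:', os.path.exists(path))
    print('loaded:', cv2.imread(path) is not None)

    # Shows which video I/O backends (including GStreamer) this build was compiled with.
    print(cv2.getBuildInformation())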

GStreamer force decodebin2 output type

落爺英雄遲暮 submitted on 2019-12-01 09:23:55
I'm trying to write a program in C which replicates the pipeline:

    gst-launch -v filesrc location="bbb.mp4" ! decodebin2 ! ffmpegcolorspace ! autovideosink

decodebin2 has a dynamic pad, and I've attached a callback to handle its creation. However, I am unable to link it to ffmpegcolorspace because the pad capability is always video/quicktime. I would like it to be video/x-raw-yuv or something else compatible with ffmpegcolorspace. Is it possible to force/select the output type of decodebin2? Thanks. EDIT: Please do not recommend playbin; I'm trying to learn how to build pipelines.
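One way to handle this is to inspect the caps of each pad the decode bin creates and only link the raw-video pad; a pad that still reports video/quicktime typically belongs to an earlier (typefind/demuxer) stage or was queried before its caps were negotiated. Below is a sketch of that caps check in Python using GStreamer 1.0 names (decodebin, videoconvert) rather than the 0.10 decodebin2/ffmpegcolorspace from the question; the same check-the-caps-in-the-pad-added-callback idea applies in C:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    pipeline = Gst.Pipeline.new('player')
    src = Gst.ElementFactory.make('filesrc', None)
    src.set_property('location', 'bbb.mp4')
    decode = Gst.ElementFactory.make('decodebin', None)
    convert = Gst.ElementFactory.make('videoconvert', None)
    sink = Gst.ElementFactory.make('autovideosink', None)

    for element in (src, decode, convert, sink):
        pipeline.add(element)
    src.link(decode)
    convert.link(sink)

    def on_pad_added(dbin, pad):
        # Link only decoded raw video; ignore audio or anything else.
        name = pad.get_current_caps().get_structure(0).get_name()
        if name.startswith('video/x-raw'):
            pad.link(convert.get_static_pad('sink'))

    decode.connect('pad-added', on_pad_added)
    pipeline.set_state(Gst.State.PLAYING)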

Streaming using GStreamer

ぐ巨炮叔叔 submitted on 2019-12-01 02:38:23
I have an HD video, "ed_hd.avi", on System #1 and would like to stream it over the network and play it on System #2. I am using GStreamer on Ubuntu 11.04 and have tried a lot, but the variety of errors makes this difficult to diagnose. I would be thankful for a working command for the System #1 end and the System #2 end. What I have tried is as follows:

    System #1:
    gst-launch filesrc location=ed_hd.avi ! decodedin ! x263enc ! video/x-h264 ! rtph264pay ! udpsink host=127.0.0.1 port=5000

    System #2:
    gst-launch udpsrc port=5000 ! rtph264depay ! decodebin ! xvimagesink

The objective is: Convert
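For reference, a hedged sketch of corrected pipelines: decodedin and x263enc in the attempt above are not GStreamer element names (decodebin and x264enc are), and the receiving udpsrc needs RTP caps plus an H.264 decoder before the sink. The sketch uses Gst.parse_launch with GStreamer 1.0 element names (the question's gst-launch is the 0.10 tool, where e.g. avdec_h264 would be ffdec_h264), 127.0.0.1 stands in for System #1's address as in the attempt above, and in practice each pipeline runs on its own machine:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    # System #1: decode the AVI, re-encode as H.264, packetize as RTP, send over UDP.
    sender = Gst.parse_launch(
        'filesrc location=ed_hd.avi ! decodebin ! videoconvert '
        '! x264enc tune=zerolatency ! rtph264pay '
        '! udpsink host=127.0.0.1 port=5000'
    )

    # System #2: the caps on udpsrc tell rtph264depay what the RTP payload carries.
    receiver = Gst.parse_launch(
        'udpsrc port=5000 caps="application/x-rtp,media=video,'
        'encoding-name=H264,payload=96" '
        '! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink'
    )

    receiver.set_state(Gst.State.PLAYING)
    sender.set_state(Gst.State.PLAYING)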

Opening a GStreamer pipeline from OpenCV with VideoWriter

让人想犯罪 __ submitted on 2019-12-01 01:45:24
I am capturing and processing video frames with OpenCV, and I would like to write them to an H.265 video file, but I am struggling to get a proper GStreamer pipeline to work from OpenCV. GStreamer works fine by itself; in particular, I am able to run this command, which encodes video very quickly (thanks to GPU acceleration) and saves it to an MKV file:

    gst-launch-1.0 videotestsrc num-buffers=90 ! 'video/x-raw, format=(string)I420, width=(int)640, height=(int)480' ! omxh265enc ! matroskamux ! filesink location=test.mkv

Now I would like to do the same thing from within my OpenCV application. My code
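A sketch of the same pipeline driven from OpenCV's VideoWriter: the writer's "filename" becomes a GStreamer pipeline that starts with appsrc, and OpenCV pushes the frames into it. This assumes OpenCV was built with GStreamer support, a recent enough OpenCV (3.4+/4.x) for the apiPreference overload of VideoWriter, and that the omxh265enc element from the command above is available on the target (elsewhere an element such as x265enc would take its place); the frame size and FPS are placeholders that must match the frames actually written:

    import cv2

    # appsrc receives the BGR frames from OpenCV; videoconvert turns them into
    # something omxh265enc accepts, and matroskamux/filesink write the MKV.
    pipeline = ('appsrc ! videoconvert ! omxh265enc ! matroskamux '
                '! filesink location=test.mkv')

    fps = 30.0
    size = (640, 480)
    writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, fps, size, True)
    if not writer.isOpened():
        raise RuntimeError('VideoWriter failed to open the GStreamer pipeline')

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, size))   # keep the frame size consistent

    cap.release()
    writer.release()        # sends EOS so the MKV gets finalized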