video

Splitting webm video to png with transparency

有些话、适合烂在心里 submitted on 2021-02-07 08:11:38

Question: I need to split a WebM-encoded video into PNG frames without losing transparency. I use the following ffmpeg command:

    ffmpeg -i dancer1.webm -pix_fmt rgba frames/%04d.png

This produces a directory of PNGs, but why is each output frame missing transparency? I have used this example video, which contains an alpha channel. See it playing over a background here. Here's an example output frame from ffmpeg. ffmpeg produces the following output when it runs: ffmpeg version N-60294-g549f052
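
A likely cause: ffmpeg's built-in VP8/VP9 decoder has historically ignored the alpha channel in WebM, and the usual fix is to force the libvpx decoder before the input. A minimal Python sketch that assembles such a command (the file names come from the question; the `-c:v libvpx-vp9` choice is an assumption that applies to VP9 streams — substitute `libvpx` for VP8):

```python
# Sketch: build an ffmpeg argv that decodes WebM alpha via libvpx.
# Assumption: the input is VP9; for VP8 substitute "libvpx".
def alpha_split_cmd(src, pattern):
    return [
        "ffmpeg",
        "-c:v", "libvpx-vp9",   # decoder override must precede -i to take effect
        "-i", src,
        "-pix_fmt", "rgba",     # keep the alpha plane in the output PNGs
        pattern,
    ]

cmd = alpha_split_cmd("dancer1.webm", "frames/%04d.png")
```

Running the list with `subprocess.run(cmd)` would invoke ffmpeg; the key point is that the decoder override comes before `-i`, otherwise the native decoder is used and alpha is dropped.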

How to record video with webcam in Safari on iOS and macOS?

青春壹個敷衍的年華 submitted on 2021-02-07 08:00:32

Question: I've researched several paths:
1) Recording video with https://caniuse.com/#feat=html-media-capture But it works only on iOS and cannot be customized. I need to render a red frame over the video preview layer and limit video length to 30 seconds.
2) Recording with a WebRTC client placed on the server, but I can't find any software to do that. I've found Kurento Media Server, but its client JS utils library does not support Safari 11.
3) Recording with the Flash plugin. But it is not supported on

Comparing Media Source Extensions (MSE) with WebRTC

南楼画角 submitted on 2021-02-07 07:15:29

Question: What are the fundamental differences between Media Source Extensions and WebRTC? If I may project my own understanding for a moment: WebRTC includes the RTCPeerConnection, which handles getting streams from Media Streams and passing them into a protocol for streaming to connected peers of the application. It seems that under the hood WebRTC abstracts away a lot of the bigger issues, like codecs and transcoding. Would this be a correct assessment? Where do Media Source Extensions fit into things? I

How to resize frame's from video with aspect ratio

六眼飞鱼酱① submitted on 2021-02-07 06:55:52

Question: I am using Python 2.7 and OpenCV. I have written this code:

    import cv2
    vidcap = cv2.VideoCapture('myvid2.mp4')
    success, image = vidcap.read()
    count = 0
    print "I am in success"
    while success:
        success, image = vidcap.read()
        resize = cv2.resize(image, (640, 480))
        cv2.imwrite("%03d.jpg" % count, resize)
        if cv2.waitKey(10) == 27:
            break
        count += 1

I am working with video and am dividing the video into individual frames, as .jpg images. I am also at the same time resizing the frames to dimension
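
The hard-coded `cv2.resize(image, (640, 480))` above ignores the source aspect ratio. One approach (a sketch, not taken from the thread) is to compute a scale factor that fits the frame inside a bounding box while keeping width and height proportional; the helper below is pure arithmetic, so it behaves the same whether the frame comes from OpenCV or anywhere else:

```python
def fit_within(width, height, max_w=640, max_h=480):
    """Return (new_w, new_h) scaled to fit inside max_w x max_h,
    preserving the original aspect ratio."""
    scale = min(max_w / float(width), max_h / float(height))
    return int(round(width * scale)), int(round(height * scale))

# A 1920x1080 frame fits into 640x480 as 640x360 (16:9 preserved).
```

In the loop, one would replace the fixed size with `resize = cv2.resize(image, fit_within(image.shape[1], image.shape[0]))` (OpenCV stores frames as height-first arrays, hence the index order).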

No decoder available for type 'video/x-h264

隐身守侯 submitted on 2021-02-07 06:25:09

Question: I am trying to run one of the Qt sample apps without any modification. It is called player and it is a multimedia widgets demonstration. My system is Ubuntu 16.04 64-bit. When I try to play a video, I see the following error in the console: No decoder available for type 'video/x-h264 Here is the full error after trying two different videos: Starting /home/aras/Qt5.7.0_Sept2016/Examples/Qt-5.7/multimediawidgets/build-player-Sept2016-Debug/player... Warning: "No decoder available for type 'video/x
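
On Ubuntu, Qt Multimedia sits on top of GStreamer, and this warning usually means no GStreamer H.264 decoder element is installed. A small lookup sketch follows; the caps-to-package mapping is my assumption for Ubuntu 16.04, where `avdec_h264` ships in `gstreamer1.0-libav` (verify with `gst-inspect-1.0 avdec_h264` before relying on it):

```python
# Hypothetical mapping from a missing GStreamer caps string to the
# Ubuntu package that typically provides a matching decoder element.
MISSING_CAPS_TO_PACKAGE = {
    "video/x-h264": "gstreamer1.0-libav",  # provides the avdec_h264 element
}

def suggest_package(caps):
    """Return the package to try installing for the given missing caps.

    Falls back to gstreamer1.0-plugins-bad, a common home for less
    widespread decoders (an assumption, not an exhaustive rule)."""
    return MISSING_CAPS_TO_PACKAGE.get(caps, "gstreamer1.0-plugins-bad")
```

So for this question the suggestion would be `sudo apt-get install gstreamer1.0-libav`, then re-running the player example.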

Extract audio from video with FFMPEG but not the same duration

烂漫一生 submitted on 2021-02-07 04:32:04

Question: My problem is that I need to extract, with FFmpeg, the audio contained in a video, keeping the same duration. But for some files that I tested, the audio's duration is sometimes shorter than the video's duration. I need the exact same duration between the audio and the video file. The command that I have already tried is the following:

    ffmpeg -i input_video.mp4 output_audio.wav

How can I fix this with options in my command? Answer 1: I found the solution. To get an audio extract with the exact
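
A common reason the WAV comes out shorter is that the audio stream genuinely ends before the video does. One hedged approach (my sketch, not necessarily the thread's accepted answer) is to pad the audio with silence using FFmpeg's `apad` filter and trim the output to the video's duration with `-t`; the duration itself would come from `ffprobe`:

```python
def pad_audio_cmd(src, dst, video_duration):
    """Build an ffmpeg argv that extracts the audio track, padding it
    with silence (apad) and trimming it to the video's duration."""
    return [
        "ffmpeg", "-i", src,
        "-vn",                           # drop the video stream
        "-af", "apad",                   # append silence if audio ends early
        "-t", "%.3f" % video_duration,   # stop output at the video's length
        dst,
    ]

cmd = pad_audio_cmd("input_video.mp4", "output_audio.wav", 12.5)
```

The `12.5` here is a placeholder; in practice one would first query the duration with something like `ffprobe -show_entries format=duration input_video.mp4` and feed that value in.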

Video compatibility issue: android recorded video not played in iphone

情到浓时终转凉″ submitted on 2021-02-07 03:01:51

Question: I am recording a video in Android like this:

    List<Camera.Size> list = myCamera.getParameters().getSupportedPictureSizes();
    Parameters parameters = myCamera.getParameters();
    parameters.setColorEffect(coloreffects.get(index_color_effect));
    myCamera.setParameters(parameters);
    mediaRecorder = new MediaRecorder();
    myCamera.unlock();
    mediaRecorder.setCamera(myCamera);
    mediaRecorder.setOrientationHint(90);
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    mediaRecorder.setVideoSource

Video background in flutter

旧巷老猫 submitted on 2021-02-07 02:48:29

Question: Is it possible to have a background video playing all the time in Flutter? I was looking for some packages and trying to make it work, but I don't know how. Maybe using something like this, but with video:

    decoration: new BoxDecoration(
      image: new DecorationImage(
        image: new AssetImage("images/f1.jpg"),
        fit: BoxFit.cover,
      ),
    ),

inside a container. Answer 1: Try this package: https://pub.dartlang.org/packages/video_player The example provided is pretty straightforward to follow. You can then just place

Use javascript to detect if an MP4 video has a sound track

谁说我不能喝 submitted on 2021-02-07 02:46:18

Question: I am creating a custom controller for MP4 video on a web page. The controller includes a volume slider. Some of the videos to be played have no sound track. It would be good to disable the volume slider for these videos, so that the user is not confused when changing the position of the volume slider has no effect. Is there a property or a trick for checking whether an MP4 file has an audio track? (jQuery is an option.) Edit: using @dandavis's suggestion, I now have this solution for
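
Browser-side answers to this typically rely on properties such as `video.webkitAudioDecodedByteCount`, `video.mozHasAudio`, or the `audioTracks` list. If you control the server, you can also answer the question before the page loads by scanning the MP4's box structure for a 'soun' handler. A pure-Python sketch (my own illustration, not from the thread; it assumes 32-bit box sizes and a standard ISO BMFF layout):

```python
import struct

# Container boxes whose payload is itself a sequence of boxes.
CONTAINERS = {b"moov", b"trak", b"mdia"}

def has_audio_track(data):
    """Return True if the MP4 byte stream contains an audio ('soun')
    handler box anywhere under moov/trak/mdia."""
    def walk(buf):
        pos = 0
        while pos + 8 <= len(buf):
            size, btype = struct.unpack(">I4s", buf[pos:pos + 8])
            if size < 8:          # malformed or 64-bit size; stop scanning
                return False
            body = buf[pos + 8:pos + size]
            # hdlr layout: 4 bytes version/flags, 4 bytes pre_defined,
            # then the 4-byte handler_type ('soun' for audio tracks).
            if btype == b"hdlr" and len(body) >= 12 and body[8:12] == b"soun":
                return True
            if btype in CONTAINERS and walk(body):
                return True
            pos += size
        return False
    return walk(data)
```

Calling `has_audio_track(open("movie.mp4", "rb").read())` server-side would let the page decide whether to render the volume slider at all, instead of probing in JavaScript after load.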