mpeg-dash

Combine MPEG-DASH segments (ex, init.mp4 + segments.m4s) back to a full source.mp4?

两盒软妹~ submitted on 2019-11-28 06:05:15
GPAC, http://gpac.wp.mines-telecom.fr/, can be used to segment video according to the MPEG-DASH spec. One type of output is a combination of an init file (e.g., init.mp4) and several roughly fixed-duration segments (e.g., segment-%d.m4s). What if I only have those results and want to combine them back into one full source.mp4 file? Can I use GPAC or ffmpeg for this? You can just use the cat command or a similar tool to do the job:

cat init.mp4 > source.mp4
cat segment-1.m4s >> source.mp4
cat segment-2.m4s >> source.mp4
...

To do this automatically for all segments in the current folder,
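A minimal sketch of such a loop, assuming the segment-%d.m4s naming shown above and a GNU sort whose -V (version sort) orders the numeric suffixes correctly:

    # Start with the init segment, then append every media segment in order.
    cat init.mp4 > source.mp4
    for f in $(ls segment-*.m4s | sort -V); do
        cat "$f" >> source.mp4
    done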

Flush & Latency Issue with Fragmented MP4 Creation in FFMPEG

为君一笑 submitted on 2019-11-27 19:55:19
I'm creating a fragmented MP4 for HTML5 streaming, using the following command:

-i rtsp://172.20.28.52:554/h264 -vcodec copy -an -f mp4 -reset_timestamps 1 -movflags empty_moov+default_base_moof+frag_keyframe -loglevel quiet -

"-i rtsp://172.20.28.52:554/h264" because the source is an H.264-in-RTP stream from an IP camera. For the sake of testing, the camera is set to a GOP of 1 (i.e., all frames are key frames). "-vcodec copy" because I don't need transcoding, only remuxing to MP4. "-movflags empty_moov+default_base_moof+frag_keyframe" to create a fragmented MP4 according to the media
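Since the issue is flush latency, one variant worth trying (my suggestion, not from the original post) is to fragment on every frame and flush each packet as soon as it is written:

    # Hypothetical low-latency variant of the command above:
    # frag_every_frame starts a new fragment per frame, and -flush_packets 1
    # pushes each packet out immediately instead of buffering.
    ffmpeg -i rtsp://172.20.28.52:554/h264 -vcodec copy -an -f mp4 \
           -reset_timestamps 1 \
           -movflags empty_moov+default_base_moof+frag_every_frame \
           -flush_packets 1 -loglevel quiet -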

Low Latency DASH Nginx RTMP

一曲冷凌霜 submitted on 2019-11-27 17:11:28
Question: I use arut's nginx-rtmp-module (https://github.com/arut/nginx-rtmp-module) on the media server, stream to its dash application using FFmpeg, and then test the stream by playing it in VLC. It waits around 30 seconds to start playing, and it plays from the beginning, not from the current timestamp. This is my current config in the rtmp block:

rtmp {
    server {
        listen 1935;
        application live {
            live on;
            exec ffmpeg -re -i rtmp://localhost:1935/live/$name -c:a libfdk_aac -b:a 32k -c:v
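The excerpt cuts off mid-command, so the full setup isn't shown. For reference, a minimal sketch of what the dash application side of such a config might look like, with short fragments to cut startup delay (the directive values here are my assumptions, not taken from the post):

    application dash {
        live on;
        dash on;
        dash_path /tmp/dash;        # where MPD and segments are written
        dash_fragment 2s;           # shorter fragments -> faster startup
        dash_playlist_length 10s;   # keep the playlist window small
    }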

Live streaming dash content using mp4box

断了今生、忘了曾经 submitted on 2019-11-27 07:01:29
I'm trying to live stream H.264 content to HTML5 using the Media Source Extensions API. The following method works pretty well:

ffmpeg -i rtsp://10.50.1.29/media/video1 -vcodec copy -f mp4 -reset_timestamps 1 -movflags frag_keyframe+empty_moov -loglevel quiet out.mp4

and then:

mp4box -dash 1000 -frag 1000 -frag-rap out.mp4

I can take the MP4Box output (out_dashinit.mp4) and send it through WebSockets, chunk by chunk, to a JavaScript client that feeds it to the Media Source API. However, this is not a good method for live content. What I'm trying to do now is to create a single pipeline in
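One way to collapse the two steps into a single pipeline (a sketch under my own assumptions, not the poster's final solution) is to let ffmpeg emit fragmented MP4 directly on stdout, skipping the temporary file and the separate MP4Box pass:

    # Single-process sketch: remux the RTSP feed into fragmented MP4 on stdout.
    # "websocket_feeder" is a placeholder for whatever pushes chunks to clients.
    ffmpeg -i rtsp://10.50.1.29/media/video1 -vcodec copy -f mp4 \
           -reset_timestamps 1 \
           -movflags frag_keyframe+empty_moov+default_base_moof \
           -loglevel quiet - | websocket_feeder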

Low latency (< 2s) live video streaming HTML5 solutions? [closed]

谁说胖子不能爱 submitted on 2019-11-26 09:19:31
Question: With Chrome disabling Flash by default very soon, I need to start looking into Flash/RTMP HTML5 replacement solutions. Currently, with Flash + RTMP, I have a live video stream with < 1-2 seconds of delay. I've experimented with MPEG-DASH, which seems to be the new industry standard for streaming, but the best I could squeeze out of it was a 5-second delay. For context, I am trying to allow users to control physical objects they can see on the stream, so anything above a couple