h.264

Encoding FFMPEG to MPEG-DASH – or WebM with Keyframe Clusters – for MediaSource API

荒凉一梦 submitted on 2019-11-27 00:30:51
Question: I'm currently sending a video stream to Chrome, to play via the MediaSource API. As I understand it, MediaSource only supports MP4 files encoded with MPEG-DASH, or WebM files whose clusters begin with keyframes (otherwise it raises the error: Media segment did not begin with keyframe). Is there any way to encode in MPEG-DASH or keyframed WebM formats with FFMPEG in real time? Edit: I just tried it with ffmpeg ... -f webm -vcodec vp8 -g 1 so that every frame is a keyframe. Not the…
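The excerpt cuts off above, but for the WebM route the usual idea is to make cluster boundaries line up with keyframes rather than forcing every frame to be a keyframe. A hedged sketch of such a command, assuming a build of ffmpeg with libvpx and the matroska/webm muxer's cluster_size_limit / cluster_time_limit options (the input name, GOP length, and limits are placeholders to tune):

    ffmpeg -i input.mp4 -c:v libvpx -keyint_min 30 -g 30 \
           -cluster_size_limit 2M -cluster_time_limit 5000 \
           -f webm pipe:1

Keeping the keyframe interval short and the cluster limits small enough that a cluster never spans more than one GOP is meant to make each cluster start on a keyframe; verify the output against the MediaSource "Media segment did not begin with keyframe" error rather than taking this as given.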

Converting images to video

*爱你&永不变心* submitted on 2019-11-26 22:46:46
Question: How can I convert images to video without using FFmpeg or JCodec, only with Android's MediaCodec? The images for the video are bitmap files that can be ARGB888 or YUV420 (my choice). The most important thing is that the video has to be playable on Android devices, and the maximum API level is 16. I know all about the API 18 MediaMuxer, and I cannot use it. Please help me, I have been stuck on this for many days. (JCodec is too slow, and FFmpeg is very complicated to use.) Answer 1: There is no simple way to do this in API 16…

Use FFMPEG on Android [closed]

混江龙づ霸主 submitted on 2019-11-26 20:50:46
Does somebody know how to use FFMPEG on Android to convert a YUV420 frame to H.264? I have ported FFMPEG to Android with the NDK; I just don't know how to use it. Source code would be appreciated. Eli Konky: You have two options: (1) use the ffmpeg API (google for ffmpeg sample code); this requires a good understanding of the API, which is very comprehensive. (2) Compile ffmpeg.c and invoke its main() via JNI; this requires that you understand the command-line parameters. It is rather cumbersome, but it works. You need to look out for the static vars defined in ffmpeg.c and reset them every time you invoke main().
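A minimal sketch of option (2), assuming you have built a shared library from ffmpeg.c plus a small JNI shim; the library name "ffmpegjni", the native run() method, and the helper below are hypothetical, not part of ffmpeg itself:

    public class FfmpegCli {
        static {
            System.loadLibrary("ffmpegjni"); // hypothetical .so built from ffmpeg.c + a JNI shim
        }

        // Implemented in C: converts the String[] to a char*[] and calls ffmpeg's main(argc, argv).
        public static native int run(String[] args);

        // Example use for the YUV420 -> H.264 case from the question.
        public static int encodeYuvToH264(String yuvPath, int width, int height, String outPath) {
            return run(new String[] {
                    "ffmpeg",
                    "-f", "rawvideo", "-pix_fmt", "yuv420p",
                    "-s", width + "x" + height,
                    "-i", yuvPath,
                    "-c:v", "libx264",
                    outPath
            });
        }
    }

The C side still has to reset ffmpeg.c's static state between calls, as the answer warns.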

Video rendering is broken MediaCodec H.264 stream

◇◆丶佛笑我妖孽 submitted on 2019-11-26 20:47:09
Question: I am implementing a decoder using the MediaCodec Java API to decode a live H.264 remote stream. I receive the H.264-encoded data from the native layer through a callback (void OnRecvEncodedData(byte[] encodedData)), decode it, and render it on the Surface of a TextureView. My implementation is complete (retrieving the encoded stream via the callback, decoding, rendering, etc.). Here is my decoder class: public class MediaCodecDecoder extends Thread implements MyFrameAvailableListener { private static final boolean…
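The asker's class is cut off above; purely as a point of comparison, here is a minimal sketch of the usual feed/drain pattern for a Surface-rendering H.264 decoder. It assumes android.media.MediaCodec, android.media.MediaFormat and java.nio.ByteBuffer; surface, running, nextEncodedAccessUnit() and ptsUs() are placeholders for the caller's own plumbing, and the width/height are only format hints:

    MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    // If SPS/PPS are not in-band in the stream, set them here via csd-0 / csd-1 buffers.
    codec.configure(format, surface, null, 0);
    codec.start();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (running) {
        byte[] encoded = nextEncodedAccessUnit();   // e.g. data handed over by OnRecvEncodedData
        int inIndex = codec.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer in = codec.getInputBuffer(inIndex);
            in.clear();
            in.put(encoded);
            codec.queueInputBuffer(inIndex, 0, encoded.length, ptsUs(), 0);
        }
        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            codec.releaseOutputBuffer(outIndex, true);  // true = render to the Surface bound in configure()
        }
    }
    codec.stop();
    codec.release();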

FFmpeg can't decode H264 stream/frame data

◇◆丶佛笑我妖孽 submitted on 2019-11-26 19:45:29
Question: Recently I had the chance to work with two devices that stream H264 over RTSP, and I've run into a problem trying to decompress this stream using the FFmpeg library. Every time avcodec_decode_video2 is called, FFmpeg just says something like: [h264 @ 00339220] no frame! My raw H264 stream's I-frame data starts like this: "65 88 84 21 3F F8 F8 0D..." (as far as I understand, the 0x65 indicates that it's an IDR frame?). Other frames from one device start like: "41 9A 22 07 F3 4E…

How to use AVSampleBufferDisplayLayer in iOS 8 for RTP H264 Streams with GStreamer?

天大地大妈咪最大 submitted on 2019-11-26 19:15:53
Question: After learning that the hardware H.264 decoder is available to programmers in iOS 8, I want to use it now. There is a nice introduction, 'Direct Access to Video Encoding and Decoding', from WWDC 2014; you can take a look at it there. Based on Case 1 from that session, I started to develop an application that should be able to get an H264 RTP/UDP stream from GStreamer, sink it into an 'appsink' element to get direct access to the NAL units, and do the conversion to create CMSampleBuffers, which my…

What h.264 format loads on android AND IOS?

非 Y 不嫁゛ submitted on 2019-11-26 18:54:16
Question: Theoretically both iOS and Android will play h.264 files, but I can't figure out an encoding setting so that they actually work cross-platform. Does anybody know how to encode one file that plays on both Android and iOS? P.S. I know all about HTML5 video and fallback sources; I just don't want to encode and host a new video for every device that comes down the pike. Answer 1: Here's the ffmpeg command line we use to transcode to MPEG-4 h.264 in our production environment. We've tested the output…
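The answer's own command is cut off above. As an illustration only (not the answer's command), a broadly compatible encode usually pins the Baseline profile, a modest level, 4:2:0 chroma and AAC audio, for example:

    ffmpeg -i input.mov \
           -c:v libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p \
           -c:a aac -b:a 128k \
           -movflags +faststart \
           output.mp4

-movflags +faststart moves the moov atom to the front of the file so playback can begin before the whole file has downloaded, which matters for mobile browsers.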

Raw H264 frames in mpegts container using libavcodec

不打扰是莪最后的温柔 submitted on 2019-11-26 17:56:52
Question: I would really appreciate some help with the following issue: I have a gadget with a camera producing H264-compressed video frames, and these frames are sent to my application. The frames are not in a container, just raw data. I want to use ffmpeg and libav functions to create a video file that can be used later. If I decode the frames and then re-encode them, everything works fine and I get a valid video file. (The decode/encode steps are the usual libav calls, nothing fancy here; I took…
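The goal in the excerpt (putting already-compressed frames into a container without re-encoding) has a command-line equivalent that is useful as a sanity check before writing the libav code. A hedged example, assuming the raw frames are concatenated Annex B data saved to frames.h264 and a nominal 30 fps:

    ffmpeg -f h264 -framerate 30 -i frames.h264 -c:v copy output.ts

-c:v copy keeps the compressed frames untouched and only rewrites the container, which is exactly what the libav-based code in the question is trying to do programmatically.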

Playing RTSP with python-gstreamer

断了今生、忘了曾经 submitted on 2019-11-26 16:19:09
Question: I use GStreamer to play RTSP streams from IP cameras (like Axis). I use a command line like this: gst-launch-0.10 rtspsrc location=rtsp://192.168.0.127/axis-media/media.amp latency=0 ! decodebin ! autovideosink and it works fine. I want to control it with a GUI in PyGTK, so I use the GStreamer Python bindings. I've written this piece of code: [...] self.player = gst.Pipeline("player") source = gst.element_factory_make("rtspsrc", "source") source.set_property("location", "rtsp://192.168.0.127…

Fetching the dimensions of an H264 video stream

一世执手 submitted on 2019-11-26 15:50:35
I am trying to fetch the dimensions (height and width) from an H264 stream. I know that to fetch the same details from an MPEG-2 stream you have to look at the four bytes following the sequence header start code (00 00 01 B3). Will the same logic work for H264? Would appreciate any help I get. Cipi: NO! You must run a complex function to extract the video dimensions from the Sequence Parameter Set. How to do this? Well, first you must write your own Exp-Golomb decoder, or find one online; in the live555 source code there is one, for example. Then you must get one SPS NAL unit. It has NAL=0x67 (NAL is…
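To make the Exp-Golomb / SPS procedure described above concrete, here is a minimal, hedged sketch of the parsing order up to the width/height fields. It assumes a Baseline or Main profile SPS (profile_idc below 100, so none of the High-profile chroma/bit-depth/scaling-list fields are present), 4:2:0 chroma for the cropping arithmetic, and an input byte array that is the SPS NAL unit with emulation-prevention bytes (00 00 03) already removed; the class and method names are my own:

    class SpsSizeParser {
        private final byte[] rbsp;
        private int bitPos;

        SpsSizeParser(byte[] spsNal) { this.rbsp = spsNal; }

        private int readBit() {
            int b = (rbsp[bitPos / 8] >> (7 - (bitPos % 8))) & 1;
            bitPos++;
            return b;
        }

        private int readBits(int n) {
            int v = 0;
            for (int i = 0; i < n; i++) v = (v << 1) | readBit();
            return v;
        }

        // ue(v): count leading zero bits, then read that many bits as the suffix.
        private int readUE() {
            int zeros = 0;
            while (readBit() == 0) zeros++;
            return (1 << zeros) - 1 + readBits(zeros);
        }

        // se(v): signed value mapped from the unsigned Exp-Golomb code.
        private int readSE() {
            int v = readUE();
            return (v % 2 == 0) ? -(v / 2) : (v + 1) / 2;
        }

        // Returns {width, height} in pixels.
        int[] parse() {
            readBits(8);              // NAL header byte (0x67)
            readBits(8);              // profile_idc (assumed < 100 here)
            readBits(8);              // constraint_set flags + reserved bits
            readBits(8);              // level_idc
            readUE();                 // seq_parameter_set_id
            readUE();                 // log2_max_frame_num_minus4
            int pocType = readUE();   // pic_order_cnt_type
            if (pocType == 0) {
                readUE();             // log2_max_pic_order_cnt_lsb_minus4
            } else if (pocType == 1) {
                readBit();            // delta_pic_order_always_zero_flag
                readSE();             // offset_for_non_ref_pic
                readSE();             // offset_for_top_to_bottom_field
                int n = readUE();     // num_ref_frames_in_pic_order_cnt_cycle
                for (int i = 0; i < n; i++) readSE();
            }
            readUE();                 // max_num_ref_frames
            readBit();                // gaps_in_frame_num_value_allowed_flag
            int widthInMbs = readUE() + 1;        // pic_width_in_mbs_minus1
            int heightInMapUnits = readUE() + 1;  // pic_height_in_map_units_minus1
            int frameMbsOnly = readBit();         // frame_mbs_only_flag
            if (frameMbsOnly == 0) readBit();     // mb_adaptive_frame_field_flag
            readBit();                // direct_8x8_inference_flag
            int cropLeft = 0, cropRight = 0, cropTop = 0, cropBottom = 0;
            if (readBit() == 1) {     // frame_cropping_flag
                cropLeft = readUE(); cropRight = readUE();
                cropTop = readUE();  cropBottom = readUE();
            }
            // For 4:2:0, crop units are 2 px horizontally and 2*(2 - frame_mbs_only_flag) vertically.
            int width = widthInMbs * 16 - (cropLeft + cropRight) * 2;
            int height = (2 - frameMbsOnly) * heightInMapUnits * 16
                    - (cropTop + cropBottom) * 2 * (2 - frameMbsOnly);
            return new int[] { width, height };
        }
    }

For High profile streams you would additionally have to parse chroma_format_idc, the bit-depth fields and any scaling lists right after seq_parameter_set_id, which is where most shortcut parsers break.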