H.264

Forcing Mpeg2Demultiplexer to use ffdshow to render H264 Digital TV Video

Submitted by 半世苍凉 on 2019-12-04 22:27:14
I spent a lot of time trying to make the DTVViewer sample of DirectShow work, unfortunately with no success. The video format of the DVB-T network is H.264, and I found that the Intelligent Connect behavior of IFilterGraph prefers the MPEG-2 Video format. For those who want to see the code, here it is. If you do not know anything about DirectShow, I have shared my experience with this code. The most probable problem is described in steps 5 and 6 of the tutorial. The code for the helper function which connects the filters: public static void UnsafeConnectFilters(IFilterGraph2 graph, IBaseFilter source, IBaseFilter dest,

SPS values for H 264 stream in iPhone

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-04 21:48:21
Can someone point me to documentation that will help me get correct SPS and PPS values for the iPhone? The question is a bit unclear... The Picture Parameter Set is described in the latest ITU-T release of the standard in section 7.3.2.2, and the Sequence Parameter Set in section 7.3.2.1. You can encode a single frame to a file and then extract the SPS and PPS from that file. I have an example that shows how to do exactly that at http://www.gdcl.co.uk/2013/02/20/iOS-Video-Encoding.html I am sure you know, but you can only save H.264-encoded video into a file (.mp4, .mov) on iOS. There is no access to
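That scan-the-output approach is straightforward once the data is in Annex B form. A minimal Java sketch, assuming the encoder output has already been converted to an Annex B byte stream (the file name is hypothetical); the SPS is NAL type 7 and the PPS is type 8:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SpsPpsExtractor {
        public static void main(String[] args) throws Exception {
            byte[] s = Files.readAllBytes(Paths.get("frame.h264")); // hypothetical input
            for (int i = 0; i + 3 < s.length; i++) {
                // Annex B start code: 00 00 01 (optionally preceded by another 00)
                if (s[i] == 0 && s[i + 1] == 0 && s[i + 2] == 1) {
                    int type = s[i + 3] & 0x1F; // nal_unit_type: low 5 bits of the NAL header
                    if (type == 7) System.out.println("SPS at offset " + (i + 3));
                    if (type == 8) System.out.println("PPS at offset " + (i + 3));
                }
            }
        }
    }

In an MP4/MOV file the parameter sets normally live in the avcC sample description box rather than inline in the track data, so this scan applies to an elementary stream, not to the raw container bytes.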

.h264 sample file

Submitted by 拥有回忆 on 2019-12-04 20:32:23
I'm currently using the files here, but I get some errors while testing my program. I just want to see if it fails only with this one or with all other .h264 files. So, are there any other sources where I can download (standard) .h264 sample files for testing? Thanks. Option 1: make your own with x264. These are not standard sample files, but you can control which parts of H.264 they use, for example different profile/level/etc., make them I-frame-only, make them use only a particular macroblock type, and so on. You can also make them tiny, e.g. one or a few frames long. Option 2: perhaps the JM software
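As a sketch of Option 1 (file names hypothetical): writing to a .h264 extension makes x264 emit a raw Annex B elementary stream, --frames limits the length, and --keyint 1 forces every frame to be an IDR frame:

    x264 --profile baseline --level 3.0 --keyint 1 --frames 5 -o tiny.h264 input.y4m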

FFmpeg: Read profile level information from mp4

Submitted by 浪子不回头ぞ on 2019-12-04 19:56:59
Question: I have an MP4 file and need its profile and level. FFmpeg says it has Baseline profile, which is what I need, but I also need the level. Here is what I get from FFmpeg: ffmpeg version 0.8, Copyright (c) 2000-2011 the FFmpeg developers built on Jul 20 2011 13:32:19 with gcc 4.4.3 configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264
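Newer FFmpeg builds can print both fields directly, e.g. ffprobe -v error -select_streams v:0 -show_entries stream=profile,level input.mp4 (the level is reported as an integer, 30 meaning 3.0). The same values also sit at fixed offsets in the SPS NAL unit: the three bytes after the one-byte NAL header are profile_idc, the constraint flags, and level_idc. A minimal Java sketch, assuming sps already holds one complete SPS NAL unit with the start code stripped:

    public class SpsHeader {
        // Returns e.g. "profile 66, level 3.0" for a Baseline level-3.0 SPS.
        static String profileAndLevel(byte[] sps) {
            int profileIdc = sps[1] & 0xFF; // 66 = Baseline, 77 = Main, 100 = High
            int levelIdc = sps[3] & 0xFF;   // ten times the level, e.g. 30 = level 3.0
            return "profile " + profileIdc + ", level " + (levelIdc / 10.0);
        }

        public static void main(String[] args) {
            // Hypothetical SPS header bytes: NAL header 0x67, profile 66, constraint flags, level 30.
            byte[] sps = {0x67, 66, (byte) 0xC0, 30};
            System.out.println(profileAndLevel(sps)); // prints: profile 66, level 3.0
        }
    }

(One caveat: level 1b is signalled through a constraint flag rather than level_idc, so a full parser has to check sps[2] as well.)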

How to extract motion vectors from H.264 AVC CMBlockBufferRef after VTCompressionSessionEncodeFrame

Submitted by 隐身守侯 on 2019-12-04 19:20:31
I'm trying to read or understand the CMBlockBufferRef representation of a single (1/30 s) H.264 AVC frame. The buffer and the encapsulating CMSampleBufferRef are created using a VTCompressionSessionRef. https://gist.github.com/petershine/de5e3d8487f4cfca0a1d The H.264 data is represented as an AVC memory buffer, a CMBlockBufferRef, from the compressed sample. Without fully decompressing again, I'm trying to extract motion vectors or predictions from this CMBlockBufferRef. I believe that for the fastest performance, byte-by-byte reading from the data buffer using CMBlockBufferGetDataPointer() should be necessary. However
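Worth noting: motion vectors live inside the entropy-coded slice data, so they cannot be pulled out without at least running the CABAC/CAVLC entropy decoder over each slice; no byte-level scan will expose them. What a byte-level scan can do cheaply is walk the NAL units, since VideoToolbox emits AVCC layout (length prefixes instead of start codes). A Java sketch of that walk, assuming the common 4-byte length prefix (the actual prefix size is recorded in the avcC extradata):

    import java.nio.ByteBuffer;

    public class AvccWalker {
        // Walks length-prefixed (AVCC) NAL units and prints each unit's type.
        static void walk(ByteBuffer buf) {
            while (buf.remaining() > 4) {
                int nalLength = buf.getInt();                 // 4-byte big-endian length prefix
                int nalType = buf.get(buf.position()) & 0x1F; // low 5 bits of the NAL header
                System.out.println("NAL type " + nalType + ", " + nalLength + " bytes");
                buf.position(buf.position() + nalLength);     // jump to the next unit
            }
        }
    }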

How are access units aligned within PES packets in Apple's HLS?

Submitted by ╄→гoц情女王★ on 2019-12-04 17:43:13
Does Apple specify this? How many access units should one put in a PES packet payload? Also, I'm wondering which start code prefixes (if any) are present in PES packets. I assume that the one preceding the first NAL unit within an access unit is useless and mustn't be included. Right? I'd like to know how it's done specifically in HLS - not necessarily any other MPEG-2 TS application. "I'd like to know how it's done specifically in HLS - not necessarily any other MPEG-2 TS application." HLS is a standard MPEG-2 TS stream. HLS does not do it any differently, except limiting it to a single audio and single
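On the start-code question: a TS elementary stream keeps an Annex B start code in front of every NAL unit, including the first one of each access unit, and muxers conventionally open each access unit with an access unit delimiter (NAL type 9). A Java sketch of framing one access unit before PES packetization, under those assumptions:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.List;

    public class AccessUnitFramer {
        private static final byte[] START_CODE = {0, 0, 0, 1};
        // Access unit delimiter NAL: type 9, then primary_pic_type plus RBSP stop bit.
        private static final byte[] AUD = {0x09, (byte) 0xF0};

        // Frames one access unit for a PES payload: AUD first, then every
        // NAL unit behind its own start code (the first one is NOT dropped).
        static byte[] frame(List<byte[]> nalUnits) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            out.write(START_CODE);
            out.write(AUD);
            for (byte[] nal : nalUnits) {
                out.write(START_CODE);
                out.write(nal);
            }
            return out.toByteArray();
        }
    }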

Manual encoding into MPEG-TS

Submitted by 非 Y 不嫁゛ on 2019-12-04 16:55:26
SO... I am trying to take an H.264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal, single-program, valid MPEG-TS stream without any timing information (PCR, PTS, DTS). I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports... [NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts Input #0, mpegts, from 'video.ts': Duration: N/A, bitrate: N/A Program 1 Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709),
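For reference, the fixed 188-byte TS packet layout being generated here looks roughly like this in Java; a sketch only, since a real muxer pads short payloads with an adaptation field instead of trailing stuffing bytes:

    public class TsPacketizer {
        static final int PACKET_SIZE = 188;

        // Builds one 188-byte TS packet. The payload must fit in the 184 bytes
        // after the header; longer PES data is split across packets that share
        // an incrementing continuity counter.
        static byte[] packet(int pid, boolean payloadStart, int continuityCounter, byte[] payload) {
            byte[] p = new byte[PACKET_SIZE];
            p[0] = 0x47;                                          // sync byte
            p[1] = (byte) (((payloadStart ? 1 : 0) << 6) | ((pid >> 8) & 0x1F));
            p[2] = (byte) (pid & 0xFF);
            p[3] = (byte) (0x10 | (continuityCounter & 0x0F));    // payload only, no adaptation field
            for (int i = 4; i < PACKET_SIZE; i++) p[i] = (byte) 0xFF; // naive stuffing
            System.arraycopy(payload, 0, p, 4, Math.min(payload.length, PACKET_SIZE - 4));
            return p;
        }
    }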

MFCreateFMPEG4MediaSink does not generate MSE-compatible MP4

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-04 16:15:14
I'm attempting to stream an H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG-4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream. Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2 sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm

h264 reference frames

Submitted by 匆匆过客 on 2019-12-04 14:49:01
I'm looking for an algorithm for finding reference frames in an H.264 stream. The most common method I saw in different solutions was finding access unit delimiters and NAL units of IDR type. Unfortunately, most streams I checked didn't have NAL units of IDR type. I'll be grateful for help. Regards, Jacek. puffadder: H.264 frames are split up by a special tag, called the start code prefix, which is either 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01. All the data between two start codes comprises a NAL unit in H.264 speak. So what you want to do is search for the start code prefix in your H.264 stream. The byte following the
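When a stream carries no IDR NAL units, the NAL header still identifies reference pictures: nal_ref_idc, the two bits above nal_unit_type, is nonzero for any NAL the decoder must keep for prediction. A Java sketch combining that check with the start-code scan described above:

    public class ReferenceFrameScanner {
        // Scans an Annex B stream and reports slice NAL units that carry
        // reference pictures (nal_ref_idc != 0).
        static void scan(byte[] s) {
            for (int i = 0; i + 3 < s.length; i++) {
                if (s[i] == 0 && s[i + 1] == 0 && s[i + 2] == 1) {
                    int header = s[i + 3] & 0xFF;
                    int nalRefIdc = (header >> 5) & 0x03; // bits 5..6 of the NAL header
                    int nalType = header & 0x1F;          // bits 0..4
                    if (nalType == 5)
                        System.out.println("IDR slice at offset " + i);
                    else if (nalType == 1 && nalRefIdc != 0)
                        System.out.println("non-IDR reference slice at offset " + i);
                }
            }
        }
    }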

android mediacodec: real time decoding h264 nals

Submitted by 别来无恙 on 2019-12-04 14:08:00
I'm trying to decode H.264 NALs in real time with the Android low-level media API. Each NAL contains one full frame, so I expect that after feeding the input with my NAL and calling dequeueOutputBuffer it would "immediately" (with a little delay, of course) display my frame, but it doesn't. I see the first frame, and the dequeue returns the first buffer only after feeding the decoder with the second one, which at that time should render the second frame. Frames are encoded with the zerolatency preset of x264, so no B-frames etc... I guess that there might be a way to set the decoder to render the frame immediately
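A sketch of the decode loop in question, with hypothetical dimensions, where the SPS/PPS are passed as csd-0/csd-1 so the decoder can configure itself before the first slice arrives. Note that many hardware decoders hold back one frame regardless of how the input was encoded, so a one-frame delay may remain:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.nio.ByteBuffer;

    public class NalDecoder {
        private final MediaCodec codec;

        NalDecoder(Surface surface, byte[] sps, byte[] pps) throws Exception {
            MediaFormat fmt = MediaFormat.createVideoFormat("video/avc", 640, 480); // hypothetical size
            fmt.setByteBuffer("csd-0", ByteBuffer.wrap(sps)); // SPS
            fmt.setByteBuffer("csd-1", ByteBuffer.wrap(pps)); // PPS
            codec = MediaCodec.createDecoderByType("video/avc");
            codec.configure(fmt, surface, null, 0);
            codec.start();
        }

        // Feeds one Annex B NAL and renders whatever frame the decoder has ready.
        void decode(byte[] nal, long ptsUs) {
            int in = codec.dequeueInputBuffer(10_000);
            if (in >= 0) {
                ByteBuffer buf = codec.getInputBuffer(in);
                buf.put(nal);
                codec.queueInputBuffer(in, 0, nal.length, ptsUs, 0);
            }
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int out = codec.dequeueOutputBuffer(info, 10_000);
            if (out >= 0) codec.releaseOutputBuffer(out, true); // render to the Surface
        }
    }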