H.264

Decoding H.264 in iOS 8 with Video Toolbox

Submitted by 蓝咒 on 2019-12-02 21:15:25
I need to decode an H.264 stream and get the pixel buffers. I know it is possible with Video Toolbox on iOS 8. 1. How do I convert the H.264 stream to a CMSampleBufferRef? 2. How do I use Video Toolbox to decode? I assume you get the stream in Annex B format; if it is already in AVCC format (i.e. read from an MP4), you can use AVAssetReader and do not need to do much. For an Annex B stream (this is what people often call a raw H.264 stream): extract the SPS/PPS NAL units and create a parameter set from them. You receive them periodically; they contain the information the decoder needs to decode a frame.
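A minimal sketch of the session setup that answer describes, using the C-level Video Toolbox API (the function name makeSession and the way the parameter sets arrive are assumptions; error checks are omitted):

    #include <VideoToolbox/VideoToolbox.h>

    // Build a format description from the SPS/PPS NAL payloads (start codes
    // stripped) and open a decompression session.
    VTDecompressionSessionRef makeSession(const uint8_t *sps, size_t spsSize,
                                          const uint8_t *pps, size_t ppsSize,
                                          VTDecompressionOutputCallback outputCallback) {
        const uint8_t *paramSets[2] = { sps, pps };
        const size_t paramSizes[2] = { spsSize, ppsSize };

        CMVideoFormatDescriptionRef fmtDesc = NULL;
        CMVideoFormatDescriptionCreateFromH264ParameterSets(
            kCFAllocatorDefault, 2, paramSets, paramSizes,
            4,              // length-field size of the samples you will submit
            &fmtDesc);

        VTDecompressionOutputCallbackRecord cb = { outputCallback, NULL };
        VTDecompressionSessionRef session = NULL;
        VTDecompressionSessionCreate(kCFAllocatorDefault, fmtDesc,
                                     NULL,   // let Video Toolbox pick the decoder
                                     NULL,   // default pixel-buffer attributes
                                     &cb, &session);
        return session;
    }

Each Annex B NAL unit must then be repacked with a 4-byte big-endian length prefix into a CMBlockBuffer/CMSampleBuffer before it is submitted with VTDecompressionSessionDecodeFrame; decoded CVPixelBuffers arrive in the callback.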

Create mp4 files on Android using Jcodec

Submitted by ぐ巨炮叔叔 on 2019-12-02 21:13:13
I have some trouble writing MP4 files on Android using MediaRecorder and Jcodec. Here is my code:

    public class SequenceEncoder {
        private final static String CLASSTAG = SequenceEncoder.class.getSimpleName();
        private SeekableByteChannel ch;
        private byte[] yuv = null;
        private ArrayList<ByteBuffer> spsList;
        private ArrayList<ByteBuffer> ppsList;
        private CompressedTrack outTrack;
        private int frameNo;
        private MP4Muxer muxer;
        ArrayList<ByteBuffer> spsListTmp = new ArrayList<ByteBuffer>();
        ArrayList<ByteBuffer> ppsListTmp = new ArrayList<ByteBuffer>();
        // Encoder
        private MediaCodec mediaCodec =

H.264 codec explained [closed]

Submitted by 断了今生、忘了曾经 on 2019-12-02 21:04:12
I am making an app which supports video calls, and I am looking for a tutorial/doc explaining the structure of the H.264 codec. I want to be able to package the stream, wrap it in datagrams, send it, and unpack it on the receiving side. Any suggestions/reading materials? What do you mean by structure? If you are talking about the bitstream syntax, you can download the H.264 standard for free. There are also many books/papers

In H.264, does a NAL unit mean a frame?

Submitted by 喜你入骨 on 2019-12-02 20:41:27
I am working on an H.264 video codec. I want to know: is a single NAL unit in H.264 equivalent to one video frame? No. A sequence of NAL units is decoded into video frames, but they are not equivalent: one coded picture may be split across several slice NAL units, and other NAL units (SPS, PPS, SEI) carry no picture data at all. http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC http://en.wikipedia.org/wiki/Network_Abstraction_Layer#NAL_Units_in_Byte-Stream_Format_Use http://wiki.multimedia.cx/index.php?title=H.264 Source: https://stackoverflow.com/questions/6858991/in-h264-nal-units-means-frame
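To make the distinction concrete, here is a small sketch (C++, Annex B input assumed, emulation-prevention bytes ignored for brevity) that walks a stream and prints each NAL unit's type; a typical stream shows SPS/PPS/SEI units interleaved with slice units, and a frame may span several slices:

    #include <cstdint>
    #include <cstdio>

    // Walk an Annex B buffer and print the type of every NAL unit.
    // nal_unit_type is the low 5 bits of the byte after the start code.
    void listNalUnits(const uint8_t *data, size_t size) {
        for (size_t i = 0; i + 3 < size; ++i) {
            bool sc3 = data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1;
            bool sc4 = data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 0 &&
                       i + 4 < size && data[i + 3] == 1;
            if (!sc3 && !sc4) continue;
            size_t hdr = i + (sc3 ? 3 : 4);
            int type = data[hdr] & 0x1F;
            printf("NAL at offset %zu, type %d%s\n", i, type,
                   type == 7 ? " (SPS)" : type == 8 ? " (PPS)" :
                   type == 5 ? " (IDR slice)" : type == 1 ? " (non-IDR slice)" : "");
            i = hdr;    // resume scanning after the NAL header byte
        }
    }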

How to write a Live555 FramedSource to allow me to stream H.264 live

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-02 19:41:21
I've been trying to write a class that derives from FramedSource in Live555, which will allow me to stream live data from my D3D9 application to an MP4 or similar. Each frame I grab the backbuffer into system memory as a texture, convert it from RGB to YUV420P, encode it with x264, and then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource, derived from FramedSource basically by copying the DeviceSource file. Instead of the input being an input file, I've made it a NAL packet which I update each frame. I'm quite new to codecs and streaming,
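A sketch of the delivery path such a class usually needs (the fTo, fMaxSize, fFrameSize, fNumTruncatedBytes, fPresentationTime members and afterGetting() are Live555's FramedSource machinery; the method name deliverFrame and how the encoder hands over NALs are assumptions):

    #include "FramedSource.hh"
    #include <cstring>
    #include <sys/time.h>

    class H264FramedSource : public FramedSource {
    public:
        // Hand one encoded NAL unit to the sink that is waiting for a frame.
        void deliverFrame(const uint8_t *nal, unsigned nalSize) {
            if (!isCurrentlyAwaitingData()) return;   // sink not ready yet

            if (nalSize > fMaxSize) {                 // sink buffer too small
                fNumTruncatedBytes = nalSize - fMaxSize;
                fFrameSize = fMaxSize;
            } else {
                fNumTruncatedBytes = 0;
                fFrameSize = nalSize;
            }
            gettimeofday(&fPresentationTime, NULL);
            memcpy(fTo, nal, fFrameSize);

            // Tell Live555 the frame is ready; it will re-arm doGetNextFrame().
            FramedSource::afterGetting(this);
        }

    protected:
        H264FramedSource(UsageEnvironment &env) : FramedSource(env) {}
        virtual void doGetNextFrame() {
            // In a real source: wait until x264 emits a NAL, then deliverFrame().
        }
    };

If the source feeds an H264VideoStreamDiscreteFramer, the NAL units should be delivered without their 00 00 01 start codes.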

Parsing H264 in mdat MP4

Submitted by 被刻印的时光 ゝ on 2019-12-02 19:28:10
I have a file that only contains the mdat atom of an MP4 container. The data in the mdat is AVC data, and I know the encoding parameters for it. The format does not appear to be the Annex B byte-stream format. I am wondering how I would go about parsing this. I have tried searching for the slice header, but have not had much luck. Is it possible to parse the slices without the NALs? AVC NAL units appear in the following format in the mdat section: [4 bytes] = NAL length, network byte order; [NAL bytes]. In short, the start codes are simply replaced by lengths. Be careful! The NAL length field is not necessarily 4 bytes; its size (1, 2, or 4 bytes) is defined by the lengthSizeMinusOne field of the avcC box.
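A sketch of that walk in C++, assuming the common 4-byte length field (if the avcC box is missing, as here, the field size has to be guessed or taken from the known encoding parameters):

    #include <cstdint>
    #include <cstdio>

    // Walk length-prefixed AVC NAL units in an mdat payload.
    void walkAvccNals(const uint8_t *mdat, size_t size) {
        size_t pos = 0;
        while (pos + 4 <= size) {
            // 4-byte length, network (big-endian) byte order.
            uint32_t len = (uint32_t)mdat[pos] << 24 | (uint32_t)mdat[pos + 1] << 16 |
                           (uint32_t)mdat[pos + 2] << 8 | mdat[pos + 3];
            pos += 4;
            if (len == 0 || pos + len > size) break;   // corrupt input or done
            int type = mdat[pos] & 0x1F;               // nal_unit_type
            printf("NAL type %d, %u bytes\n", type, len);
            pos += len;                                // next length field
        }
    }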

H264 frame viewer

Submitted by 家住魔仙堡 on 2019-12-02 18:30:57
Do you know any application that will show me all the headers/parameters of a single H.264 frame? I don't need to decode it, I just want to see how it is built up. Fredrik Pihl: Three ways come to mind (if you are looking for something free; otherwise google "h264 analysis" for paid options): download the h.264 parser (from this thread @ doom9 forums); download the h.264 reference software; libh264bitstream provides h.264 bitstream reading/writing. This should get you started. By the way, the h.264 byte-stream format is described in Annex B of the ITU spec. I had the same question. I tried

convert H264 video to raw YUV format

Submitted by 北城以北 on 2019-12-02 17:32:52
Is it possible to create a raw YUV video from an H.264-encoded video using ffmpeg? I want to open the video with MATLAB and access the luma, Cb and Cr components frame by frame. alexbuisson: Yes you can, you just have to specify the pixel format. To get the whole list of formats:

    ffmpeg -pix_fmts | grep -i pixel_format_name

For example, if you want to save the first video track of an MP4 file as a yuv420p (p means planar) file:

    ffmpeg -i video.mp4 -c:v rawvideo -pix_fmt yuv420p out.yuv

Source: https://stackoverflow.com/questions/20609760/convert-h264-video-to-raw-yuv-format
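Once out.yuv exists, the planar layout makes per-frame access straightforward. A C++ sketch reading one yuv420p frame at a time (the dimensions are assumptions and must match the source video):

    #include <cstdint>
    #include <fstream>
    #include <vector>

    int main() {
        const int w = 1920, h = 1080;           // assumed dimensions of video.mp4
        const size_t ySize = (size_t)w * h;     // full-resolution luma plane
        const size_t cSize = ySize / 4;         // Cb/Cr are subsampled 2x2

        std::vector<uint8_t> Y(ySize), Cb(cSize), Cr(cSize);
        std::ifstream in("out.yuv", std::ios::binary);

        // yuv420p stores each frame as Y plane, then Cb plane, then Cr plane.
        while (in.read((char *)Y.data(), ySize) &&
               in.read((char *)Cb.data(), cSize) &&
               in.read((char *)Cr.data(), cSize)) {
            // process the Y, Cb, Cr planes of this frame here
        }
    }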

H.264 conversion with FFmpeg (from a RTP stream)

Submitted by 不羁的心 on 2019-12-02 17:11:27
Environment: I have an IP camera which is capable of streaming its data over RTP in an H.264-encoded format. This raw stream is recorded from the Ethernet, and that is the data I have to work with. Goal: in the end I want to have a *.mp4 file which I can play with common media players (like VLC or Windows Media Player). What have I done so far: I take the raw stream data I have and parse it. Since the data has been transmitted via RTP, I need to take care of the NAL bytes, SPS and PPS. 1. Write a raw file. First I determine the type of each frame received over Ethernet. To do so, I parse the first two bytes of
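The "first two bytes" here are the NAL unit header and, for fragmented packets, the FU header defined in RFC 6184. A sketch of that classification in C++ (the function name is an assumption; payload points just past the 12-byte RTP header):

    #include <cstdint>
    #include <cstddef>

    // Classify one H.264 RTP payload per RFC 6184.
    // Returns the nal_unit_type carried by this packet; for FU-A fragments,
    // *fuStart is set on the first fragment of the fragmented NAL.
    int h264PayloadType(const uint8_t *payload, size_t len, bool *fuStart) {
        *fuStart = false;
        int type = payload[0] & 0x1F;             // low 5 bits of the NAL header
        if (type == 28 && len >= 2) {             // 28 = FU-A fragmentation unit
            *fuStart = (payload[1] & 0x80) != 0;  // S bit of the FU header
            return payload[1] & 0x1F;             // real type is in the FU header
        }
        return type;                              // 1..23: single NAL (7=SPS, 8=PPS)
    }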

H.264 over RTP - Identify SPS and PPS Frames

Submitted by 為{幸葍}努か on 2019-12-02 16:00:39
I have a raw H.264 stream from an IP camera, packed in RTP frames. I want to get the raw H.264 data into a file so I can convert it with ffmpeg. When I want to write the data into my raw H.264 file, I found out it has to look like this:

    00 00 01 [SPS]
    00 00 01 [PPS]
    00 00 01 [NAL byte] [PAYLOAD RTP Frame 1]   // payload always without the first 2 bytes -> NAL
    [PAYLOAD RTP Frame 2]
    [... until a PAYLOAD frame with the mark bit is received]   // from here it's a new video frame
    00 00 01 [NAL byte] [PAYLOAD RTP Frame 1]
    ....

So I get the SPS and the PPS from the Session Description Protocol out of my preceding
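A sketch of the writer this layout implies (sps/pps are assumed to have been base64-decoded from the SDP's sprop-parameter-sets; the function names are assumptions):

    #include <cstdint>
    #include <fstream>
    #include <vector>

    static const uint8_t START_CODE[3] = { 0x00, 0x00, 0x01 };

    // Prefix one NAL unit with an Annex B start code and append it to the file.
    void appendNal(std::ofstream &out, const uint8_t *nal, size_t size) {
        out.write((const char *)START_CODE, sizeof START_CODE);
        out.write((const char *)nal, size);
    }

    // sps/pps: parameter sets base64-decoded from the SDP's sprop-parameter-sets.
    // Afterwards, call appendNal() for every NAL reassembled from the RTP stream.
    void writeHeader(std::ofstream &out,
                     const std::vector<uint8_t> &sps,
                     const std::vector<uint8_t> &pps) {
        appendNal(out, sps.data(), sps.size());
        appendNal(out, pps.data(), pps.size());
    }

Something like ffmpeg -framerate 25 -i stream.h264 -c copy out.mp4 should then produce a playable file (a raw H.264 stream carries no timestamps, so the frame rate has to be supplied).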