H.264

Need to convert H.264 stream from Annex-B format to AVCC format

拥有回忆 Submitted on 2019-11-30 04:34:16
I need to convert an H.264 stream from Annex-B format to AVCC format. I tried this: I extracted the SPS and PPS from the Annex-B stream and created the extradata. I then scanned the stream for 0x00 0x00 0x00 0x01 (which should mark the start of each NAL) and kept scanning for the next 0x00 0x00 0x00 0x01 (which marks the end of the NAL), took the difference to get the NAL length, and replaced the leading 0x00 0x00 0x00 0x01 with the 4-byte NAL size. But this does not seem to produce a valid stream. I then found out that a NAL can also start/end with the 3-byte start code 0x00 0x00 0x01, so the conversion has to handle both 3- and 4-byte start codes.
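A minimal sketch of that conversion in C, handling both start-code lengths; the function names and the caller-owned output buffer are illustrative assumptions, not code from the question:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Find the next Annex-B start code at or after `pos`.
       Returns its index (or `len` if none); *scLen gets its size (3 or 4). */
    static size_t next_start_code(const uint8_t *buf, size_t len,
                                  size_t pos, size_t *scLen) {
        for (size_t i = pos; i + 3 <= len; i++) {
            if (buf[i] == 0x00 && buf[i + 1] == 0x00) {
                if (buf[i + 2] == 0x01) { *scLen = 3; return i; }
                if (i + 4 <= len && buf[i + 2] == 0x00 && buf[i + 3] == 0x01) {
                    *scLen = 4; return i;
                }
            }
        }
        *scLen = 0;
        return len;
    }

    /* Convert an Annex-B buffer to AVCC: each start code becomes a
       4-byte big-endian length prefix. Caller frees the returned buffer. */
    uint8_t *annexb_to_avcc(const uint8_t *in, size_t inLen, size_t *outLen) {
        uint8_t *out = malloc(inLen * 2);  /* generous: 3-byte codes grow by 1 */
        if (!out) return NULL;
        size_t o = 0, sc, nextSc;
        size_t p = next_start_code(in, inLen, 0, &sc);
        while (p < inLen) {
            size_t nalStart = p + sc;
            size_t q = next_start_code(in, inLen, nalStart, &nextSc);
            size_t nalSize = q - nalStart;
            out[o++] = (uint8_t)(nalSize >> 24);   /* big-endian length */
            out[o++] = (uint8_t)(nalSize >> 16);
            out[o++] = (uint8_t)(nalSize >> 8);
            out[o++] = (uint8_t)(nalSize);
            memcpy(out + o, in + nalStart, nalSize);
            o += nalSize;
            p = q;
            sc = nextSc;
        }
        *outLen = o;
        return out;
    }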

How to get H.264 video info?

瘦欲@ Submitted on 2019-11-30 04:28:06
How can I get specific H.264 video information from a video file? I need to know the profile (Baseline/Main/High) and whether the movie contains B-frames. I've found that the best way to do this is using ffprobe with the -show_streams parameter. It shows both the H.264 profile and B-frame usage for the movie's video streams:

    ffprobe -show_streams -i "file.mp4"
    [STREAM]
    index=0
    codec_name=h264
    codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    profile=High
    codec_type=video
    codec_time_base=1001/48000
    codec_tag_string=avc1
    codec_tag=0x31637661
    width=1920
    height=1080
    has_b_frames=0
    sample_aspect_ratio=0:1
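If only those two fields are needed, ffprobe can print them directly; the flags below are standard ffprobe options and the file name is a placeholder:

    ffprobe -v error -select_streams v:0 -show_entries stream=profile,has_b_frames -of default=noprint_wrappers=1 "file.mp4"

has_b_frames reports the size of the decoder's frame-reordering buffer, so a value greater than 0 indicates the stream uses B-frames.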

AVAssetWriterInput H.264 Passthrough to QuickTime (.mov) - Passing in SPS/PPS to create avcC atom?

杀马特。学长 韩版系。学妹 Submitted on 2019-11-30 04:07:19
I have a stream of H.264/AVC NALs consisting of types 1 (P frame), 5 (IDR/I frame), 7 (SPS), and 8 (PPS). I want to write them into a .mov file without re-encoding. I'm attempting to use AVAssetWriter to do this. The documentation for AVAssetWriterInput states: "Passing nil for outputSettings instructs the input to pass through appended samples, doing no processing before they are written to the output file. This is useful if, for example, you are appending buffers that are already in a desirable compressed format. However, passthrough is currently supported only when writing to QuickTime Movie" files.
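One way to get the SPS/PPS into the avcC atom during passthrough is to attach a format description built from the parameter sets to the appended sample buffers; CMVideoFormatDescriptionCreateFromH264ParameterSets is the CoreMedia call for this. A sketch, where spsBytes/ppsBytes and their lengths are assumed inputs (raw SPS/PPS NAL units starting at the NAL header byte, with no start codes):

    #import <CoreMedia/CoreMedia.h>

    CMVideoFormatDescriptionRef formatDesc = NULL;
    const uint8_t *paramSets[2] = { spsBytes, ppsBytes };   // assumed inputs
    const size_t paramSizes[2]  = { spsLength, ppsLength };
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault,
        2,            // one SPS + one PPS
        paramSets,
        paramSizes,
        4,            // length-field size used by the AVCC sample data
        &formatDesc);

The samples appended through the passthrough input then need to be AVCC (length-prefixed) and carry this format description, e.g. when wrapped via CMSampleBufferCreate.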

MediaCodec H264 Encoder not working on Snapdragon 800 devices

▼魔方 西西 Submitted on 2019-11-29 22:39:24
I have written an H.264 stream encoder using the MediaCodec API of Android. I tested it on about ten different devices with different processors, and it worked on all of them except Snapdragon 800 powered ones (Google Nexus 5 and Sony Xperia Z1). On those devices I get the SPS and PPS and the first keyframe, but after that mEncoder.dequeueOutputBuffer(mBufferInfo, 0) only returns MediaCodec.INFO_TRY_AGAIN_LATER. I have already experimented with different timeouts, bitrates, resolutions, and other configuration options, to no avail. The result is always the same. I use the following code to

How do I use hardware-accelerated video/H.264 decoding with DirectX 11 and Windows 7?

↘锁芯ラ Submitted on 2019-11-29 22:38:50
I've been researching all day and haven't gotten very far. I'm on Windows 7, using DirectX 11. (My final output is to be a frame of video on a DX11 texture.) I want to decode some very large H.264 video files, and the CPU (using libav) doesn't cut it. I've looked at the hwaccel capabilities of libav using DXVA2, but hit a roadblock when I need to create an IDirectXVideoDecoder, which can only be created with a D3D9 interface (which I don't have using DX11). Whenever I've looked up DXVA documentation, it doesn't reference DX11; was this removed in DX10 or 11? (Can't find any confirmation of this,

Extracting H.264 from CMBlockBuffer

爷,独闯天下 Submitted on 2019-11-29 21:53:59
I am using Apple's VideoToolbox (iOS) to compress raw frames captured by the device camera. My callback is being called with a CMSampleBufferRef object that contains a CMBlockBuffer. The CMBlockBuffer object contains the H.264 elementary stream, but I didn't find any way to get a pointer to that stream. When I printed the CMSampleBufferRef object to the console I got:

    (lldb) po blockBufferRef
    CMBlockBuffer 0x1701193e0 totalDataLength: 4264 retainCount: 1 allocator: 0x1957c2c80 subBlockCapacity: 2
     [0] 4264 bytes @ offset 128 Buffer Reference:
     CMBlockBuffer 0x170119350 totalDataLength:
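A minimal sketch of getting at those bytes with CMBlockBufferGetDataPointer, assuming sampleBuffer is the CMSampleBufferRef the compression callback receives:

    #import <CoreMedia/CoreMedia.h>

    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t totalLength = 0;
    char *dataPointer = NULL;
    OSStatus status = CMBlockBufferGetDataPointer(blockBuffer,
                                                  0,      // offset
                                                  NULL,   // lengthAtOffset (unused)
                                                  &totalLength,
                                                  &dataPointer);
    if (status == kCMBlockBufferNoErr) {
        // dataPointer now addresses totalLength bytes of encoded data.
        // Note: VideoToolbox emits AVCC NALs (4-byte length prefixes, no
        // Annex-B start codes), and SPS/PPS live in the sample buffer's
        // format description, not in this block buffer.
    }

If the block buffer is not contiguous, CMBlockBufferCreateContiguousCopy can flatten it first, and CMVideoFormatDescriptionGetH264ParameterSetAtIndex retrieves the SPS/PPS.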

How AVSampleBufferDisplayLayer displays H.264

你说的曾经没有我的故事 Submitted on 2019-11-29 21:08:17
I want to share the knowledge about this that I worked out over the past few days; there isn't a lot to find about it. I am still puzzling over the sound. Comments and tips are welcome. ;-) Here are my code snippets. Declare it:

    @property (nonatomic, retain) AVSampleBufferDisplayLayer *videoLayer;

First, set up the video layer:

    self.videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
    self.videoLayer.bounds = self.bounds;
    self.videoLayer.position = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    self.videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    self.videoLayer
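The step the excerpt cuts off before is attaching the layer and feeding it sample buffers. A minimal sketch, where self.layer and sampleBuffer are assumptions rather than code from the excerpt:

    [self.layer addSublayer:self.videoLayer];

    // Later, for each CMSampleBufferRef carrying AVCC H.264 data with an
    // attached CMVideoFormatDescription (built from the SPS/PPS):
    if (self.videoLayer.isReadyForMoreMediaData) {
        [self.videoLayer enqueueSampleBuffer:sampleBuffer];
    }

The layer decodes and displays the frames itself; no separate VTDecompressionSession is needed for display-only use.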

Stream H.264 video over rtp using gstreamer

随声附和 Submitted on 2019-11-29 21:06:32
I am a newbie with GStreamer and am trying to get used to it. My first target is to create a simple RTP stream of H.264 video between two devices. I am using these two pipelines:

Sender:

    gst-launch-1.0 -v filesrc location=c:\\tmp\\sample_h264.mov ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000

Receiver:

    gst-launch-1.0 -v udpsrc port=5000 ! rtpmp2tdepay ! decodebin ! autovideosink

But with the first one (the sender) I got the following error:

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    (gst-launch-1.0:5788): CRITICAL **: gst_adapter_map: assertion `size > 0' failed
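Two likely problems are visible in the excerpt: x264enc is being fed a .mov container rather than raw video (a plausible trigger for the gst_adapter_map assertion), and the receiver's rtpmp2tdepay does not match the sender's rtph264pay. A possible corrected pair of pipelines, using standard GStreamer 1.x elements; the caps values on the receiver are the usual rtph264pay defaults:

    Sender:
    gst-launch-1.0 -v filesrc location=c:\\tmp\\sample_h264.mov ! decodebin ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000

    Receiver:
    gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

Explicit caps are needed on udpsrc because raw UDP carries no stream description for the depayloader to negotiate against.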

Convert MP4 to a maximally mobile-supported MP4 using FFmpeg

為{幸葍}努か Submitted on 2019-11-29 20:38:59
I would like to use ffmpeg to convert an MP4 to a 'low size' MP4. I need an MP4 file with H.263 video and AAC audio (or some other settings supported by low-cost mobiles). My main concern is that the video be playable on most devices. What would be some possible ffmpeg commands to accomplish this? Thanks in advance.

There are numerous ways to encode MP4 videos, and encoding them for mobile devices is even more complex. I'm not sure what you mean by "low cost mobile": do you mean low cost as in the device, or the bandwidth needed to play said video? Either way, here is a post to get you going: H.264
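For broad device compatibility, Baseline-profile H.264 with AAC audio is far more widely supported in MP4 than H.263. One commonly used recipe, where the file names, the 480-line height, and the audio bitrate are placeholders to adjust:

    ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p -vf scale=-2:480 -c:a aac -b:a 128k -movflags +faststart output.mp4

-movflags +faststart moves the moov atom to the front of the file so playback can start before the whole file has downloaded, and scale=-2:480 keeps the width an even number as H.264 encoders require.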

Hardware accelerated h.264 decoding to texture, overlay or similar in iOS

时光怂恿深爱的人放手 Submitted on 2019-11-29 20:08:44
Is it possible, and supported, to use the iOS hardware-accelerated H.264 decoding API to decode a local (not streamed) video file, and then compose other objects on top of it? I would like to make an application that involves drawing graphical objects in front of a video, and to use the playback timer to synchronize what I am drawing on top with what is being played in the video. Then, based on the user's actions, I want to change what I am drawing on top (but not the video). Coming from DirectX, OpenGL, and OpenGL ES for Android, I am picturing something like rendering the video to a texture and using that
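One common route to exactly this render-to-texture setup is AVPlayerItemVideoOutput, which hands back hardware-decoded frames as CVPixelBuffers. A sketch, assuming an AVPlayer named player configured elsewhere and a display link driving the per-frame pull:

    NSDictionary *attrs = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    AVPlayerItemVideoOutput *output =
        [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attrs];
    [player.currentItem addOutput:output];

    // On each display-link tick: grab the frame for "now" if one is ready.
    CMTime itemTime = [output itemTimeForHostTime:CACurrentMediaTime()];
    if ([output hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer =
            [output copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
        // Upload pixelBuffer to a GL/Metal texture, draw overlays above it,
        // then CVPixelBufferRelease(pixelBuffer).
    }

The decoded frames never have to touch the screen directly, so the overlays can be composed in the app's own render pass while the item time keeps the drawing synchronized with playback.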