H.264

How to encode video with transparent background

Posted by 爷,独闯天下 on 2019-12-12 09:52:35
Question: I am encoding a video using Cocoa for OS X (with AVAssetWriter) in H.264. This is the configuration:

    // Configure video writer
    AVAssetWriter *m_videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)]
                                                             fileType:AVFileTypeMPEG4
                                                                error:NULL];
    // Configure video input
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @(m_width),
                                     AVVideoHeightKey : @(m_height) };
    AVAssetWriterInput *m_writerInput = [[AVAssetWriterInput alloc] …
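H.264 itself has no alpha channel, so transparency cannot survive an AVVideoCodecH264 encode no matter how the writer is configured. A minimal sketch of the usual workaround, under the assumption that Apple ProRes 4444 (which does carry alpha) and a QuickTime container are acceptable substitutes; this is not the asker's code:

    // Sketch: preserve transparency by encoding with ProRes 4444 instead of H.264.
    // ProRes requires a .mov container, so the file type changes as well.
    NSDictionary *alphaSettings = @{
        AVVideoCodecKey  : AVVideoCodecAppleProRes4444, // alpha-capable codec
        AVVideoWidthKey  : @(m_width),
        AVVideoHeightKey : @(m_height)
    };
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:NULL];
    AVAssetWriterInput *input = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                               outputSettings:alphaSettings];
    [writer addInput:input];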

ffmpeg RTSP error while decoding MB

Posted by 送分小仙女□ on 2019-12-12 08:47:21
Question: I'm using ffmpeg to read an H.264 RTSP stream from a Cisco 3050 IP camera and re-encode it to disk as H.264 (there are reasons why I'm not just using -codec:copy). The ffmpeg version is as follows:

    ffmpeg version 3.2.6 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 6.3.0 (Alpine 6.3.0)

I've also tried ffmpeg 2.8.14-0ubuntu0.16.04.1 and the latest ffmpeg built from source (I used this commit) and see the same behaviour as below. The command I'm running is: ffmpeg -rtsp…
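The decoder message "error while decoding MB" almost always indicates packets lost in transit, which is common when RTSP negotiates UDP transport. A sketch of the usual first fix, forcing TCP interleaved transport; the URL and encoder options below are illustrative assumptions, not the asker's command:

    # Force RTSP over TCP so datagrams are not silently dropped,
    # then re-encode to H.264. URL and encoder settings are placeholders.
    ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream \
           -codec:v libx264 -preset veryfast -crf 23 out.mp4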

MediaCodec decoder always times out while decoding an H.264 file

Posted by 半腔热情 on 2019-12-12 08:15:57
Question: I have been trying to decode an H.264-encoded video file with Android's MediaCodec and to send the decoder's output to a Surface, but when I run the app it shows a black surface, and in DDMS logcat I see that the decoder timed out. I first parsed the file into valid frames [reading 4 bytes that give the length of the upcoming frame, then reading that many bytes for the frame itself, then again reading 4 bytes for the length of the next…
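A timeout from dequeueOutputBuffer usually means the decoder never received valid SPS/PPS data or the input timestamps never advanced. A minimal sketch of a MediaCodec decode loop rendering straight to a Surface; the Iterator frame source and parameter values are my assumptions, not the asker's code:

    // Sketch: decode H.264 access units (length prefixes already stripped)
    // to a Surface. Assumes SPS/PPS arrive in-band before the first IDR frame.
    void decodeToSurface(Iterator<byte[]> frames, Surface surface,
                         int width, int height, int frameRate) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
        codec.configure(format, surface, null, 0);   // decode straight to the Surface
        codec.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        long ptsUs = 0;
        while (frames.hasNext()) {
            int inIndex = codec.dequeueInputBuffer(10_000);   // timeout in microseconds
            if (inIndex >= 0) {
                ByteBuffer in = codec.getInputBuffer(inIndex); // API 21+
                byte[] frame = frames.next();
                in.clear();
                in.put(frame);
                codec.queueInputBuffer(inIndex, 0, frame.length, ptsUs, 0);
                ptsUs += 1_000_000L / frameRate;   // timestamps must keep advancing
            }
            int outIndex = codec.dequeueOutputBuffer(info, 10_000);
            if (outIndex >= 0) {
                codec.releaseOutputBuffer(outIndex, true);     // true = render to Surface
            }
        }
        codec.stop();
        codec.release();
    }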

Convert an H.264 byte string to OpenCV images

Posted by 回眸只為那壹抹淺笑 on 2019-12-12 07:24:24
Question: In Python, how do I convert an H.264 byte string into images OpenCV can read, keeping only the latest image? Long version: Working in Python, I'm trying to get the output from adb screenrecord piped in a way that allows me to capture a frame whenever I need it and use it with OpenCV. As I understand it, I need to read the stream constantly because it's H.264. I've tried multiple things to get it working and concluded that I needed to ask for specific help. The following gets me the…
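A common pattern is to let ffmpeg do the H.264 decoding and read raw BGR frames from its stdout in a background thread, overwriting a single slot so only the newest frame is kept. A sketch under the assumptions that adb and ffmpeg are on PATH and that the device resolution is 1080x1920; grab() returns an ndarray that can be handed straight to cv2.imshow:

    # Sketch: pipe `adb exec-out screenrecord` through ffmpeg and keep only
    # the newest decoded frame for OpenCV. Resolution is an assumption.
    import subprocess
    import threading

    import numpy as np

    WIDTH, HEIGHT = 1080, 1920
    FRAME_BYTES = WIDTH * HEIGHT * 3

    adb = subprocess.Popen(
        ["adb", "exec-out", "screenrecord", "--output-format=h264", "-"],
        stdout=subprocess.PIPE,
    )
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-f", "h264", "-i", "pipe:0",
         "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
        stdin=adb.stdout,
        stdout=subprocess.PIPE,
    )

    latest = None
    lock = threading.Lock()

    def reader():
        global latest
        while True:
            buf = ffmpeg.stdout.read(FRAME_BYTES)
            if len(buf) < FRAME_BYTES:
                break  # stream ended
            frame = np.frombuffer(buf, np.uint8).reshape(HEIGHT, WIDTH, 3)
            with lock:
                latest = frame  # overwrite: only the newest frame survives

    threading.Thread(target=reader, daemon=True).start()

    def grab():
        """Return the most recent frame (a BGR ndarray) or None."""
        with lock:
            return latest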

Which video encoders are guaranteed to be supported by the Android MediaCodec API?

Posted by 不问归期 on 2019-12-12 06:17:12
Question: Testing video encoding with the MediaCodec API on several devices, I noticed all of them have encoders for H.264, H.263, and MPEG-4. Are any of these guaranteed to be supported on all devices running at least Jelly Bean, even if MediaCodec does the actual encoding in software instead of hardware?

Answer 1: The Android Compatibility Definition Document (CDD) defines a set of mandatory features. Google "Android <version> CDD" to find the appropriate one. For example, if you open the 4.3…
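To see what a particular device actually exposes at runtime, as opposed to what the CDD mandates, the codec list can be enumerated. A short sketch in the API 21+ style:

    // Sketch: list every video encoder this device exposes (API 21+).
    import android.media.MediaCodecInfo;
    import android.media.MediaCodecList;

    void dumpVideoEncoders() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (type.startsWith("video/")) {
                    System.out.println(info.getName() + " -> " + type);
                }
            }
        }
    }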

FFmpeg: how to wrap an H.264 stream into an FLV container?

Posted by 女生的网名这么多〃 on 2019-12-12 05:26:22
Question: What I want is straightforward: wrap an H.264 video stream into an FLV container. However, ffmpeg just decodes the input stream and packs raw video into the FLV. The details are described below. The input stream is captured from a video camera with a hardware encoder, and the FLV will be sent to a video server. First I used the following command:

    $ ffmpeg -framerate 15 -s 320x240 -i /dev/video1 -f flv "rtmp://some.website.com/receive/path"

However, the resulting stream is suspicious. The watching side…
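When the camera already produces H.264, the stream can be remuxed into FLV without re-encoding by explicitly selecting the camera's H.264 output format and copying the codec. A sketch, reusing the asker's device path and URL:

    # Ask V4L2 for the camera's native H.264 stream and remux it into FLV
    # without touching the encoded bits.
    ffmpeg -f v4l2 -input_format h264 -framerate 15 -video_size 320x240 \
           -i /dev/video1 -codec:v copy -f flv "rtmp://some.website.com/receive/path"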

How to render image data (YUV420SP) decoded by MediaCodec to a SurfaceView in real time on Android?

Posted by 给你一囗甜甜゛ on 2019-12-12 05:05:19
Question: MediaCodec caps the FPS at which I can decode, and I want to get past that cap, so I need to render the frames myself instead of relying on MediaCodec's built-in rendering. I assume that only RGB565 can be rendered to a SurfaceView on the Android platform. I've searched for many YUV420→RGB565 solutions for Android, but all of them need separate Y, U, and V planes, and separating YUV420SP data into Y, U, and V would cost too much time. How do I fix that? Thanks to all who help. @Codo

    if (colorFmt == 21) {
        int nSize = bufferInfo…
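YUV420SP does not have to be split into planes first: its chroma samples are interleaved in UV pairs after the Y plane, so a converter can index them in place. A sketch of a direct YUV420SP (color format 21, COLOR_FormatYUV420SemiPlanar) to ARGB conversion; note that some decoders emit NV21 instead, which swaps the two chroma bytes:

    // Sketch: convert YUV420SP to ARGB in place, without splitting the
    // buffer into Y/U/V planes. Assumes NV12 byte order (U before V).
    static void yuv420spToArgb(byte[] yuv, int width, int height, int[] argb) {
        int frameSize = width * height;          // Y plane, then interleaved UV
        for (int row = 0; row < height; row++) {
            int uvRow = frameSize + (row >> 1) * width;
            for (int col = 0; col < width; col++) {
                int y = (yuv[row * width + col] & 0xFF) - 16;
                if (y < 0) y = 0;
                int uvIndex = uvRow + (col & ~1); // one UV pair serves two pixels
                int u = (yuv[uvIndex] & 0xFF) - 128;
                int v = (yuv[uvIndex + 1] & 0xFF) - 128;
                int y1192 = 1192 * y;             // fixed-point BT.601 conversion
                int r = (y1192 + 1634 * v) >> 10;
                int g = (y1192 - 833 * v - 400 * u) >> 10;
                int b = (y1192 + 2066 * u) >> 10;
                r = Math.max(0, Math.min(255, r));
                g = Math.max(0, Math.min(255, g));
                b = Math.max(0, Math.min(255, b));
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
    }

The same loop can target an RGB565 short[] instead by packing ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3) rather than the ARGB word.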

Media Foundation H.264 encoder poor performance

Posted by 南笙酒味 on 2019-12-12 04:37:35
Question: I'm writing an application which records the PC's screen in real time and encodes it with the Media Foundation H.264 codec. Encoding consumes a lot of CPU. After I stop recording (or pause it by simply no longer feeding the encoder video and audio frames), the CPU load stays very high for a long time (5-10 seconds and more). During this time the application waits for the IMFSinkWriter::Finalize method to complete. My PC configuration:…
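A long Finalize usually means a software encoder is draining a deep backlog of buffered frames. One commonly suggested mitigation is to let the sink writer pick a hardware H.264 MFT instead; a sketch of the attribute setup (MFStartup and error handling omitted; this is not the asker's code):

    // Sketch: ask the sink writer to prefer hardware transforms (e.g. Quick
    // Sync), which typically shrinks the backlog Finalize must drain.
    #include <mfapi.h>
    #include <mfreadwrite.h>

    IMFAttributes *attrs = nullptr;
    MFCreateAttributes(&attrs, 1);
    attrs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

    IMFSinkWriter *writer = nullptr;
    MFCreateSinkWriterFromURL(L"capture.mp4", nullptr, attrs, &writer);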

GDCL multiplexer creates file with raw video, not H.264

Posted by 最后都变了- on 2019-12-12 04:14:08
Question: I have created a graph as below (I am using an Osprey card for the input live stream; built in the GraphEdit tool):

    Osprey analog video in ----> GDCL MPEG-4 multiplexer ----> File Writer (.mp4 file)

The file size is very big; even a 5-second file is 80 MB, and the file doesn't play. When I inspect the file with ffmpeg -i, it reports errors like "stream 0, missing mandatory atoms, broken header". Below is the ffmpeg response:

    ffmpeg.exe -i "C:\Documents and Settings\Administrator\Desktop\mp4file\mp4file…
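Both symptoms, the huge file and the broken header, are consistent with the multiplexer being handed uncompressed frames: an analog capture pin delivers raw video, and the muxer stores whatever it receives. The graph needs an H.264 encoder filter between the source and the multiplexer; the encoder named below is an illustrative assumption, not part of the original question:

    Osprey analog video in ----> H.264 encoder filter (e.g. an x264 DirectShow filter) ----> GDCL MPEG-4 multiplexer ----> File Writer (.mp4 file)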

Changing NALU in H.264/AVC for RTP encapsulation

Posted by 醉酒当歌 on 2019-12-12 03:22:46
Question: What can and can't I change in a NALU, in terms of syntax and size, if the NAL unit is meant for RTP encapsulation?

Answer 1: You can change whatever you want, provided that the resulting bit stream is still compliant with: the MPEG-4 Part 10 specification (H.264), and the RTP RFCs 3550 (RTP) and 3984 (RTP payload for H.264; since obsoleted by RFC 6184).

Source: https://stackoverflow.com/questions/7560060/changing-nalu-h-264-avc-for-rtp-encupsulation
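As a concrete instance of the constraint the answer describes, the payload format dictates exactly how a NALU's header byte must be rewritten when the unit is fragmented across packets. A sketch of FU-A fragmentation (RFC 6184, payload type 28); the function name and MTU value are my own:

    # Sketch: split one H.264 NAL unit into RTP FU-A payloads (RFC 6184).
    # FU-A is only needed when the NALU exceeds the per-packet payload size.
    def fu_a_fragments(nal: bytes, mtu: int = 1400):
        indicator = (nal[0] & 0xE0) | 28      # keep F + NRI bits, set type = FU-A
        nal_type = nal[0] & 0x1F              # original type moves to the FU header
        body = nal[1:]                        # the original header byte is re-encoded
        step = mtu - 2                        # two bytes go to FU indicator + header
        chunks = [body[i:i + step] for i in range(0, len(body), step)]
        for i, chunk in enumerate(chunks):
            start = 0x80 if i == 0 else 0                 # S bit on first fragment
            end = 0x40 if i == len(chunks) - 1 else 0     # E bit on last fragment
            yield bytes([indicator, start | end | nal_type]) + chunk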