mediacodec

How to know whether the Android decoder returned by MediaCodec.createDecoderByType(type) is a hardware or a software decoder?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-01 04:53:30
Question: Is there a way to find out if the decoder obtained using MediaCodec.createDecoderByType(type) is a hardware decoder or a software decoder?

Answer 1: There is no real formal flag for indicating whether a codec is a hardware or software codec. In practice, you can do this, though:

    MediaCodec codec = MediaCodec.createDecoderByType(type);
    if (codec.getName().startsWith("OMX.google.")) {
        // Is a software codec
    }

(The MediaCodec.getName() method is available since API level 18. For lower API levels
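
For completeness, a minimal sketch of the same check extended for newer API levels; the "c2.android." prefix and the isSoftwareOnly() call (added in API 29) are assumptions about current platform behaviour, not something stated in the original answer:

    import android.media.MediaCodec;
    import android.os.Build;

    final class CodecTypeCheck {
        // Heuristic: legacy software codecs use the "OMX.google." prefix and newer
        // Codec2 software codecs use "c2.android."; API 29 adds an explicit query.
        static boolean isSoftwareDecoder(MediaCodec codec) {
            if (Build.VERSION.SDK_INT >= 29) {
                return codec.getCodecInfo().isSoftwareOnly();
            }
            String name = codec.getName();   // available since API 18
            return name.startsWith("OMX.google.") || name.startsWith("c2.android.");
        }
    }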

Android MediaCodec not working [closed]

Submitted by 亡梦爱人 on 2019-12-01 01:09:19
I'm trying to directly decode the H.264-encoded camera output of the Raspberry Pi camera module on an Android device, but my code fails to properly decode the file. I get no output, and as the last frame I get a garbled image. As I am parsing the input file (an H.264 byte stream) into NAL units myself, I'm left with a question: when feeding them to the MediaCodec buffers, do I feed the NAL unit separator into the buffer as well?

Answer (fadden): The MediaCodec decoder requires an H.264 elementary stream, and wants one access unit per buffer. You also need to supply SPS/PPS before the first data
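
A minimal sketch of what "one access unit per buffer" can look like in practice. The sps, pps and accessUnit byte arrays are hypothetical inputs from the caller's own stream parser, and the Annex-B start codes (00 00 00 01) are left on each unit, which is what the byte-stream form expects:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    final class H264DecoderHelper {
        // Configure the decoder with SPS/PPS up front (csd-0 / csd-1, start codes included).
        static MediaCodec createDecoder(byte[] sps, byte[] pps, int width, int height,
                                        Surface surface) throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
            format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
            format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
            MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
            decoder.configure(format, surface, null, 0);
            decoder.start();
            return decoder;
        }

        // Queue exactly one access unit (one frame's worth of NAL units) per input buffer.
        static void queueAccessUnit(MediaCodec decoder, byte[] accessUnit, long ptsUs) {
            int inIndex = decoder.dequeueInputBuffer(10_000);
            if (inIndex >= 0) {
                ByteBuffer input = decoder.getInputBuffer(inIndex); // API 21+; getInputBuffers() before that
                input.clear();
                input.put(accessUnit);
                decoder.queueInputBuffer(inIndex, 0, accessUnit.length, ptsUs, 0);
            }
        }
    }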

How can we make the saveFrame() method in ExtractMpegFramesTest more efficient?

Submitted by ≡放荡痞女 on 2019-12-01 00:59:56
[edit] Reformatting into question-and-answer format following fadden@'s suggestion. In ExtractMpegFramesTest_egl14.java.txt, method saveFrame(), there is a loop for reordering RGBA into ARGB for Bitmap PNG compression (see the quotes from that file below); how can this be optimised?

    // glReadPixels gives us a ByteBuffer filled with what is essentially big-endian RGBA
    // data (i.e. a byte of red, followed by a byte of green...). We need an int[] filled
    // with little-endian ARGB data to feed to Bitmap.
    // ...
    // So... we set the ByteBuffer to little-endian, which should turn the bulk IntBuffer
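
The usual answer is to drop the per-pixel loop entirely and let Bitmap.copyPixelsFromBuffer() take the glReadPixels output as-is, since ARGB_8888 bitmaps store their pixels in RGBA byte order in memory. A hedged sketch, with the GL context and any vertical flip assumed to be handled when the frame is rendered:

    import android.graphics.Bitmap;
    import android.opengl.GLES20;
    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    final class FrameSaver {
        // Reads the current GL framebuffer and writes it as PNG with no per-pixel
        // reordering loop. glReadPixels returns bottom-up rows, so the frame should
        // already have been rendered flipped or the PNG will be upside down.
        static void saveFrame(String filename, int width, int height) throws IOException {
            ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4);
            pixelBuf.order(ByteOrder.LITTLE_ENDIAN);
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
            pixelBuf.rewind();

            Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            bmp.copyPixelsFromBuffer(pixelBuf);   // bulk copy, no swizzling
            try (BufferedOutputStream out =
                         new BufferedOutputStream(new FileOutputStream(filename))) {
                bmp.compress(Bitmap.CompressFormat.PNG, 90, out);
            }
            bmp.recycle();
        }
    }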

Select H264 Profile when encoding with MediaCodec and MTK Codec

Submitted by 眉间皱痕 on 2019-11-30 21:21:32
We have an Android app that encodes video into H.264. On all previously tried Android devices this encodes to Baseline profile, which is what I need. On the Lenovo Yoga 10 the codec is OMX.MTK.VIDEO.ENCODER.AVC, which encodes the video as High profile, and that causes a problem for the receiving device. I am using MediaCodec and there seems to be no way to set the profile to be used. Is there any way of doing this? The codec does claim to support Baseline profile but gives no way of using it. Is there a codec-specific parameter for this?

Answer: What you could try is to add the key profile to your MediaFormat,
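
A sketch of that suggestion, assuming an API 21+ device; MediaFormat.KEY_PROFILE is only defined from API 21 (KEY_LEVEL from API 23), on older releases you would set the raw string key "profile" instead, and vendor encoders such as OMX.MTK.* may still ignore the request:

    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;

    final class BaselineFormat {
        static MediaFormat build(int width, int height) {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            // Ask for Baseline; whether the encoder honours it is device-dependent.
            format.setInteger(MediaFormat.KEY_PROFILE,
                    MediaCodecInfo.CodecProfileLevel.AVCProfileBaseline);
            format.setInteger(MediaFormat.KEY_LEVEL,
                    MediaCodecInfo.CodecProfileLevel.AVCLevel31);
            return format;
        }
    }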

Muxing camera preview h264 encoded elementary stream with MediaMuxer

Submitted by 两盒软妹~` on 2019-11-30 20:51:34
I am working on an implementation of one of the Android test cases regarding previewTexture recording with the new MediaCodec and MediaMuxer APIs of Android 4.3. I've managed to record the preview stream with a framerate of about 30fps by setting the recordingHint on the camera parameters. However, I ran into a delay/lag problem and don't really know how to fix it. When recording the camera preview with fairly standard quality settings (1280x720, bitrate of ~8,000,000), the preview and the encoded material suffer from occasional lags. To be more specific: this lag occurs about every 2-3
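
For reference, the recordingHint mentioned above is set on the old android.hardware.Camera parameters (deprecated since API 21); a minimal sketch, with the camera object assumed to come from the caller and the 1280x720 preview size taken from the question:

    import android.hardware.Camera;

    final class PreviewConfig {
        @SuppressWarnings("deprecation")
        static void configureForRecording(Camera camera) {
            Camera.Parameters params = camera.getParameters();
            params.setRecordingHint(true);   // lets the driver optimise for a steady ~30fps preview
            params.setPreviewSize(1280, 720);
            camera.setParameters(params);
        }
    }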

Android: MediaCodec: bad video generated on Nexus for 480x480 while 640x640 works well

Submitted by 时间秒杀一切 on 2019-11-30 20:43:27
Question: I am rendering an MPEG-4/AVC video on Android using MediaCodec (and MediaMuxer). I'm testing on both an LG Nexus 4 and a Samsung Galaxy 5. On the Samsung, the rendered video looks as expected for both the 640x640 and 480x480 frame sizes. But on the Nexus, 480x480 generates a bad-looking video, while 640x640 generates a good one. The question is: what is the reason? Is this a bug or a "feature" I am not aware of? Is there a well-known frame size we can rely on being rendered correctly on all Android
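
There is no single universally safe size, but from API 21 you can at least ask the encoder whether it claims to support a resolution before using it. A sketch; this only filters out sizes the codec openly rejects and will not catch device-specific rendering bugs like the one described:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import java.io.IOException;

    final class SizeCheck {
        static boolean encoderSupportsSize(int width, int height) throws IOException {
            MediaCodec codec = MediaCodec.createEncoderByType("video/avc");
            try {
                MediaCodecInfo.VideoCapabilities caps = codec.getCodecInfo()
                        .getCapabilitiesForType("video/avc")
                        .getVideoCapabilities();      // API 21+
                return caps.isSizeSupported(width, height);
            } finally {
                codec.release();
            }
        }
    }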

How to add Audio to Video while Recording [ContinuousCaptureActivity] [Grafika]

Submitted by 瘦欲@ on 2019-11-30 20:34:08
I implemented video recording using ContinuousCaptureActivity.java and it works perfectly. Now I want to add audio to this video. I know it is possible to add audio to a video using MediaMuxer, but the problem is that I don't know how to use MediaMuxer. If you have any other solution that does not need MediaMuxer, please share a link or doc. I also have the AudioVideoRecordingSample demo, but I don't understand how to merge it with my code. Please explain if anyone knows. Thanks in advance.

Answer: Merging an audio file and a video file:

    private void muxing() {
        String outputFile = "";
        try {
            File file = new
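
The muxing() code quoted above is cut off in this excerpt, so here is a minimal sketch of the general MediaExtractor + MediaMuxer approach it follows. The file paths, the assumption that each input file has exactly one track, and the omission of error handling are all simplifications:

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaMuxer;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    final class AvFileMuxer {
        // Copies the first track of each input file into a single MP4 container.
        static void mux(String videoPath, String audioPath, String outPath) throws IOException {
            MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

            MediaExtractor video = new MediaExtractor();
            video.setDataSource(videoPath);
            video.selectTrack(0);                                // assumes track 0 is the video track
            int videoTrack = muxer.addTrack(video.getTrackFormat(0));

            MediaExtractor audio = new MediaExtractor();
            audio.setDataSource(audioPath);
            audio.selectTrack(0);                                // assumes track 0 is the audio track
            int audioTrack = muxer.addTrack(audio.getTrackFormat(0));

            muxer.start();
            copySamples(video, muxer, videoTrack);
            copySamples(audio, muxer, audioTrack);
            video.release();
            audio.release();
            muxer.stop();
            muxer.release();
        }

        private static void copySamples(MediaExtractor extractor, MediaMuxer muxer, int track) {
            ByteBuffer buffer = ByteBuffer.allocate(1 << 20);
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            while (true) {
                int size = extractor.readSampleData(buffer, 0);
                if (size < 0) break;                             // end of stream
                // The extractor's sync-sample flag (1) lines up with BUFFER_FLAG_KEY_FRAME (1).
                info.set(0, size, extractor.getSampleTime(), extractor.getSampleFlags());
                muxer.writeSampleData(track, buffer, info);
                extractor.advance();
            }
        }
    }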

How to provide both audio data and video data to MediaMuxer

Submitted by 倖福魔咒の on 2019-11-30 15:40:19
Question: I'm trying to get raw video data from the camera preview and raw audio data from AudioRecord. Then I will send them to MediaCodec (I will set up two codec instances). After that I will send the video data and audio data to MediaMuxer to get an MP4 file. I have two questions: 1) I've used MediaMuxer to process video data before. For video data, MediaMuxer processes it frame by frame, but the video recording is continuous. How can MediaMuxer keep the video and audio in synchronization? 2) I found only one variable for
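
On the synchronization point: MediaMuxer does not synchronize anything itself; it simply interleaves the samples it is given according to BufferInfo.presentationTimeUs, so both encoders must stamp their buffers from a consistent timebase. A sketch of one common way the audio side derives its timestamps from the number of PCM samples fed to its encoder (names are illustrative):

    import android.media.MediaCodec;
    import android.media.MediaMuxer;
    import java.nio.ByteBuffer;

    final class AudioTimestamper {
        private final int sampleRate;
        private long samplesWritten;

        AudioTimestamper(int sampleRate) { this.sampleRate = sampleRate; }

        // Presentation time of the next AudioRecord buffer, in microseconds.
        long nextPtsUs(int samplesInBuffer) {
            long ptsUs = samplesWritten * 1_000_000L / sampleRate;
            samplesWritten += samplesInBuffer;
            return ptsUs;
        }

        // Both tracks are written the same way; only info.presentationTimeUs differs.
        static void writeSample(MediaMuxer muxer, int trackIndex,
                                ByteBuffer encodedData, MediaCodec.BufferInfo info) {
            muxer.writeSampleData(trackIndex, encodedData, info);
        }
    }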

MediaCodec with Surface input: Producing chunked output

Submitted by ぐ巨炮叔叔 on 2019-11-30 15:29:41
Question: I'm trying to produce short sequential MP4 files from camera preview data via MediaCodec.createInputSurface(). However, recreating the MediaCodec and its associated Surface requires stopping the camera to allow another call to mCamera.setPreviewTexture(...). This delay results in an unacceptable number of dropped frames. Therefore I need to generate the CODEC_CONFIG and END_OF_STREAM data periodically without recreating the input Surface, and thus without having to call mCamera.setPreviewTexture(..
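
One approach often suggested for this (a sketch under assumptions, not necessarily the asker's final solution): keep the single encoder and its input Surface alive, remember the MediaFormat delivered with INFO_OUTPUT_FORMAT_CHANGED (it already carries csd-0/csd-1), and start a new MediaMuxer at a sync frame whenever a chunk boundary is wanted. A short KEY_I_FRAME_INTERVAL on the encoder keeps those boundaries responsive. The output path and chunk-length policy below are placeholders:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    final class ChunkedMuxerWriter {
        private final MediaFormat encoderFormat;  // captured at INFO_OUTPUT_FORMAT_CHANGED
        private final long chunkDurationUs;
        private MediaMuxer muxer;
        private int track = -1;
        private int chunkIndex;
        private long chunkStartUs = -1;

        ChunkedMuxerWriter(MediaFormat encoderFormat, long chunkDurationUs) {
            this.encoderFormat = encoderFormat;
            this.chunkDurationUs = chunkDurationUs;
        }

        // Call for every encoded output buffer (codec-config buffers excluded).
        void write(ByteBuffer data, MediaCodec.BufferInfo info) throws IOException {
            // BUFFER_FLAG_SYNC_FRAME is the pre-API-21 name for BUFFER_FLAG_KEY_FRAME.
            boolean sync = (info.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0;
            boolean chunkFull = chunkStartUs >= 0
                    && info.presentationTimeUs - chunkStartUs >= chunkDurationUs;
            if (muxer == null || (sync && chunkFull)) {
                closeCurrent();
                muxer = new MediaMuxer("/sdcard/chunk" + (chunkIndex++) + ".mp4",   // placeholder path
                        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
                track = muxer.addTrack(encoderFormat);   // format already contains csd-0/csd-1
                muxer.start();
                chunkStartUs = info.presentationTimeUs;
            }
            muxer.writeSampleData(track, data, info);
        }

        void closeCurrent() {
            if (muxer != null) {
                muxer.stop();
                muxer.release();
                muxer = null;
            }
        }
    }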