aac

Container formats and codec formats of video files

Submitted by 荒凉一梦 on 2019-11-28 08:02:31
1. Overview
Most of the video files we encounter have extensions such as mkv, mov, or mp4, and we usually make a rough guess at the file type from the extension alone. A more precise classification, however, distinguishes:
Codec: the encoding of the audio and video inside the media file, e.g. H.264, AAC.
Container: what the file extension usually identifies. It packages several parts together, including:
  - Video
  - Audio, possibly as multiple tracks, e.g. a film carrying several languages at once
  - Subtitles: a film can likewise include built-in subtitles in several languages
2. Common file (container) formats
AVI (.avi)
  Short for Audio Video Interleave, it stores audio and video streams interleaved in one file and is the most common audio/video container, supporting the widest range of codecs. It is also the longest-lived format, more than ten years old; a revised version was released (v2.0 in 1996), but the format is showing its age.
MPG (.mpg/.mpeg/.dat)
  The container used with MPEG codecs. It is stream-oriented and comes in variants such as PS and TS; PS is mainly used for DVD storage, TS mainly for HDTV.
VOB (.vob)
  The container format used by DVDs (MPEG-2 video with AC3 or DTS audio). It supports multiple video streams, audio tracks, subtitles, chapters, and so on.
MP4
  The container used with MPEG-4 codecs, developed on top of QuickTime MOV, with many advanced features.
3GP
  The format used for 3GPP video, mainly for streaming delivery.
ASF (.wmv/
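
To make the codec/container distinction concrete, here is a small sketch (Java, using Android's MediaExtractor; the file path is a placeholder and nothing in it comes from the article above) that opens a container file and prints the codec MIME type of every track it holds:

import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;

public class TrackLister {
    // Prints one line per track in the container, e.g. "video/avc" (H.264)
    // or "audio/mp4a-latm" (AAC).
    public static void listTracks(String path) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);               // open the container (mp4, mkv, ...)
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);  // codec of this track
            System.out.println("track " + i + ": " + mime);
        }
        extractor.release();
    }
}

A typical MP4 movie would list one video/avc track plus one audio/mp4a-latm track per language, which is exactly the container-holding-several-codecs structure described above.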

How do I use CoreAudio's AudioConverter to encode AAC in real-time?

Submitted by 若如初见. on 2019-11-28 06:38:57
All the sample code I can find that uses AudioConverterRef focuses on use cases where all the data is available up front (such as converting a file on disk). It commonly calls AudioConverterFillComplexBuffer with the PCM to be converted as the inInputDataProcUserData and simply hands that buffer over in the callback. (Is that really how it's supposed to be used? Why does it need a callback, then?) For my use case, I'm trying to stream AAC audio from the microphone, so I have no file, and my PCM buffer is being filled in real time. Since I don't have all the data up front, I've tried doing *ioNumberDataPackets

Decoding AAC using MediaCodec API on Android

Submitted by 独自空忆成欢 on 2019-11-27 20:16:19
I'm trying to use the MediaCodec API on Android to decode an AAC stream. (It's raw AAC.) I tried using MediaFormat.createAudioFormat() to create the format object to pass to MediaCodec.configure(), but I kept getting errors when using AAC (audio/mp4a-latm). (It works with MP3 (audio/mpeg), though...) Finally I created a MediaExtractor for an AAC file and looked at the format object it was producing. I saw that it included the key "csd-0" for a ByteBuffer composed of two bytes, both with the value 0x12. If I include that key and value in the format object that I used to configure the AAC
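
As a point of reference for the approach this (truncated) question describes, here is a minimal sketch, not the asker's code, that copies csd-0 from the format reported by MediaExtractor into the format used to configure the decoder. The sample rate, channel count, and the assumption that track 0 is the AAC track are placeholders:

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;
import java.nio.ByteBuffer;

public class AacDecoderSetup {
    public static MediaCodec createDecoder(String aacPath) throws IOException {
        // Let MediaExtractor parse the file and expose the codec-specific data.
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(aacPath);
        MediaFormat extracted = extractor.getTrackFormat(0);   // assume track 0 is the AAC track
        ByteBuffer csd0 = extracted.getByteBuffer("csd-0");    // the AudioSpecificConfig bytes

        // Build the decoder format; 44100 Hz / 2 channels are placeholder values.
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 2);
        format.setByteBuffer("csd-0", csd0);

        MediaCodec decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
        decoder.configure(format, null, null, 0);              // flags = 0 for a decoder
        decoder.start();
        extractor.release();
        return decoder;
    }
}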

Encoding AAC Audio using AudioRecord and MediaCodec on Android

Submitted by 走远了吗. on 2019-11-27 20:08:49
Question: I am trying to encode AAC audio using Android's AudioRecord and MediaCodec. I have created an encoder class very similar to (Encoding H.264 from camera with Android MediaCodec). With this class, I create an instance of AudioRecord and tell it to read its byte[] data off to the AudioEncoder (audioEncoder.offerEncoder(Data)):

while (isRecording) {
    audioRecord.read(Data, 0, Data.length);
    audioEncoder.offerEncoder(Data);
}

Here are my settings for the AudioRecord:

int audioSource = MediaRecorder
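
offerEncoder() above is the asker's own wrapper, not a platform API. As a rough illustration of what such a wrapper commonly does with MediaCodec on API 16, here is a sketch; the timeout, timestamping, and output handling are assumptions, and the encoder is expected to be configured and started elsewhere:

import android.media.MediaCodec;
import java.nio.ByteBuffer;

public class AudioEncoderSketch {
    private final MediaCodec encoder;   // an already configured and started AAC encoder

    public AudioEncoderSketch(MediaCodec encoder) {
        this.encoder = encoder;
    }

    // Push one chunk of PCM into the encoder and drain whatever AAC it has produced.
    public void offerEncoder(byte[] pcm) {
        int inIndex = encoder.dequeueInputBuffer(10000);            // wait up to 10 ms
        if (inIndex >= 0) {
            ByteBuffer inBuf = encoder.getInputBuffers()[inIndex];  // API 16 style buffer access
            inBuf.clear();
            inBuf.put(pcm);
            encoder.queueInputBuffer(inIndex, 0, pcm.length, System.nanoTime() / 1000, 0);
        }

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = encoder.dequeueOutputBuffer(info, 0);
        while (outIndex >= 0) {            // format/buffer-changed return codes simply end the loop
            ByteBuffer outBuf = encoder.getOutputBuffers()[outIndex];
            byte[] aac = new byte[info.size];
            outBuf.position(info.offset);
            outBuf.get(aac);               // one raw AAC frame, no ADTS header
            // ... hand 'aac' to a muxer or socket, or prepend an ADTS header yourself ...
            encoder.releaseOutputBuffer(outIndex, false);
            outIndex = encoder.dequeueOutputBuffer(info, 0);
        }
    }
}

Note that MediaCodec's AAC output consists of raw frames; to write a playable .aac file you normally prepend an ADTS header to each frame or feed the frames to a muxer.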

How to play m3u8 on Android?

Submitted by 别来无恙 on 2019-11-27 13:21:32
As I understand it, Android 3.0 and above can play m3u8 radio streams - http://developer.android.com/guide/appendix/media-formats.html. I put this link - http://content.mobile-tv.sky.com/content/ssna/live/ssnraudio.m3u8 - into MediaPlayer, but in LogCat I get:

06-01 09:04:44.287: INFO/LiveSession(33): onConnect 'http://content.mobile-tv.sky.com/content/ssna/live/ssnraudio.m3u8'
06-01 09:04:44.287: INFO/NuHTTPDataSource(33): connect to content.mobile-tv.sky.com:80/content/ssna/live/ssnraudio.m3u8 @0
06-01 09:04:44.747: INFO/NuHTTPDataSource(33): connect to content.mobile-tv.sky.com:80
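
For comparison, the usual way to hand an HLS (.m3u8) URL to MediaPlayer on Android 3.0+ looks like the sketch below; the URL parameter is a placeholder, and the snippet does not by itself explain the LogCat output above:

import android.media.AudioManager;
import android.media.MediaPlayer;
import java.io.IOException;

public class HlsPlayback {
    // hlsUrl is a placeholder for any .m3u8 playlist URL.
    public static MediaPlayer play(String hlsUrl) throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        player.setDataSource(hlsUrl);        // MediaPlayer downloads and parses the playlist itself
        player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            @Override
            public void onPrepared(MediaPlayer mp) {
                mp.start();                  // start once enough of the stream is buffered
            }
        });
        player.prepareAsync();               // never block the UI thread on a network stream
        return player;
    }
}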

Encode audio to aac with libavcodec

Submitted by 夙愿已清 on 2019-11-27 13:08:28
Question: I'm using libavcodec (latest git as of 3/3/10) to encode raw PCM to AAC (with libfaac support enabled). I do this by calling avcodec_encode_audio repeatedly with codec_context->frame_size samples each time. The first four calls return successfully, but the fifth call never returns. When I use gdb to break in, the stack is corrupt. If I use Audacity to export the PCM data to a .wav file, I can then use command-line ffmpeg to convert it to AAC without any issues, so I'm sure it's something I'm doing wrong

RTMP AAC basic format (repost)

Submitted by 杀马特。学长 韩版系。学妹 on 2019-11-27 10:31:40
First audio data packet: AAC sequence header
Second audio data packet: AAC raw

What the leading 0xAF byte means:
1) In the first byte, 0xAF, the high nibble 'a' is 10, which means AAC. This nibble is the Format of SoundData; the following values are defined:
0 = Linear PCM, platform endian
1 = ADPCM
2 = MP3
3 = Linear PCM, little endian
4 = Nellymoser 16 kHz mono
5 = Nellymoser 8 kHz mono
6 = Nellymoser
7 = G.711 A-law logarithmic PCM
8 = G.711 mu-law logarithmic PCM
9 = reserved
10 = AAC
11 = Speex
14 = MP3 8 kHz
15 = Device-specific sound
Formats 7, 8, 14, and 15 are reserved. AAC is supported in Flash Player 9,0,115,0 and higher. Speex is supported in Flash Player 10 and higher.
2) The lower four bits of the first byte, 'f', mean the following
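
As a small illustration of the byte layout described above, written in Java to match the other posts here (the bit positions follow the FLV specification's SoundData layout, part of which the truncated excerpt does not show):

public class FlvAudioTag {
    // Decode the first byte of an FLV/RTMP audio data packet, e.g. 0xAF for AAC.
    public static void describe(int firstByte) {
        int soundFormat = (firstByte >> 4) & 0x0F;  // high nibble: 10 = AAC
        int soundRate   = (firstByte >> 2) & 0x03;  // 0 = 5.5 kHz, 1 = 11 kHz, 2 = 22 kHz, 3 = 44 kHz
        int soundSize   = (firstByte >> 1) & 0x01;  // 0 = 8-bit, 1 = 16-bit
        int soundType   =  firstByte       & 0x01;  // 0 = mono, 1 = stereo
        System.out.printf("format=%d rate=%d size=%d type=%d%n",
                soundFormat, soundRate, soundSize, soundType);
        // For AAC (format 10) the second byte is the AACPacketType:
        // 0 = AAC sequence header, 1 = AAC raw, matching the two packets above.
    }
}

For 0xAF this prints format=10 (AAC), rate=3 (44 kHz), size=1 (16-bit), type=1 (stereo).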

ffmpeg commands explained in detail (repost)

Submitted by 耗尽温柔 on 2019-11-27 09:31:06
Excerpted from: https://www.cnblogs.com/AllenChou/p/7048528.html

FFmpeg is an open-source program for recording and converting digital audio and video and turning them into streams. It is released under the LGPL or GPL license and provides a complete solution for recording, converting, and streaming audio and video. It includes the very advanced audio/video codec library libavcodec; to ensure high portability and codec quality, much of the code in libavcodec was developed from scratch. [Baidu Baike]

ffmpeg usage syntax:
ffmpeg [[options][`-i' input_file]]... {[options] output_file}...
If no input file is given, audio/video capture is used instead. As a general rule, an option applies to the next specified file; for example, passing the -b 64 option sets the bit rate of the next video. For raw input files, a format option may be required. By default, ffmpeg tries to convert as losslessly as possible, using the same audio and video parameters for the output as the input.

3. Options
a) General options
-L            show license
-h            show help
-formats      show available formats, codecs, protocols, ...
-f fmt        force format fmt
-i filename   input file
-y            overwrite output files
-t duration   set the recording duration; times in hh:mm:ss[.xxx] format are also supported
-ss position

PCM -> AAC (Encoder) -> PCM(Decoder) in real-time with correct optimization

Submitted by 谁说我不能喝 on 2019-11-27 06:14:35
I'm trying to implement AudioRecord (MIC) -> PCM -> AAC encoder and AAC -> PCM decoder -> AudioTrack?? (SPEAKER) with MediaCodec on Android 4.1+ (API 16). First, I successfully implemented the PCM -> AAC encoder with MediaCodec as intended (though I'm not sure it is correctly optimized), as below:

private boolean setEncoder(int rate) {
    encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger
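
The excerpt cuts off in the middle of the format setup; for orientation only, a typical complete AAC encoder configuration looks like the sketch below. The bit rate, AAC profile, and max-input-size values are assumptions, not values taken from the question:

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

public class AacEncoderSetup {
    public static MediaCodec createEncoder() throws IOException {
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 1);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);              // assumed bit rate
        format.setInteger(MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC);           // AAC-LC profile
        format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384);        // optional input buffer hint

        MediaCodec encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }
}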

Developing the client for the icecast server

Submitted by 感情迁移 on 2019-11-27 01:29:47
Question: I am developing a client for the Icecast server (www.icecast.org). Can anybody tell me what format they use for streaming the content? I looked on their pages, but there is no information about the stream format at all. I then checked a Wireshark trace, and as far as I understand, the audio data I receive within the 200 OK response to the GET request is just plain binary audio data without any metadata included, so compared to the SHOUTcast
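
On the metadata point: Icecast (like SHOUTcast) only interleaves metadata into the audio stream when the client requests it with an Icy-MetaData header; otherwise the body is plain encoded audio, which matches what the trace shows. A minimal sketch of such a probe in Java (the URL is a placeholder; this is not code from the question):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class IcecastProbe {
    // Requests a stream and reports whether the server will interleave metadata.
    public static void probe(String streamUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(streamUrl).openConnection();
        conn.setRequestProperty("Icy-MetaData", "1");     // ask for interleaved metadata
        conn.connect();

        String contentType = conn.getContentType();       // e.g. audio/mpeg or audio/aacp
        String metaInt = conn.getHeaderField("icy-metaint");
        System.out.println("content-type: " + contentType + ", icy-metaint: " + metaInt);

        // With Icy-MetaData accepted, a metadata block follows every icy-metaint bytes
        // of audio; without it, the response body is nothing but the encoded audio.
        InputStream audio = conn.getInputStream();
        audio.close();
        conn.disconnect();
    }
}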