aac

ffmpeg: adding an ADTS header to encoded AAC data via extradata

Submitted by 旧街凉风 on 2019-12-21 04:59:20
A reposted article. 1: The AAC data that ffmpeg produces when encoding from PCM carries its configuration out of band (LATM/ASC style, in extradata) rather than in ADTS headers; if you want to save a playable ADTS file you have to add the header yourself, and ffmpeg provides no filter for that direction, so you parse the extradata and prepend the header on your own. 2: The aac_adtstoasc bitstream filter goes the other way, stripping ADTS headers and moving the configuration into extradata (the MPEG-4 AudioSpecificConfig form). 3: h264_mp4toannexb converts H.264 with 4-byte length prefixes into the playable Annex B form with 00 00 00 01 start codes. The repost follows; original link: https://blog.csdn.net/lichen18848950451/article/details/78266054 Based on Leixiaohua's ("雷神") code you can extract MP3 and other audio from a file; see http://blog.csdn.net/leixiaohua1020/article/details/39767055 You can also hand the data back out through a callback for further processing without any problem. But most video files today are H264+AAC, and the data extracted with that code fails to play in a player. That is because the extracted AAC data is missing its ADTS header; add it and playback works. Easy to say, harder to do: most articles online only describe the ADTS format itself, and code that actually does the processing is rare. I did eventually find some, at http://blog.csdn.net/leixiaohua1020/article/details/39767055 However
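As an illustration of the fix the article describes, here is a minimal sketch (assuming AAC-LC, and that the profile, sample-rate index and channel configuration have already been parsed out of the encoder's extradata / AudioSpecificConfig) of building the 7-byte ADTS header that gets prepended to every raw AAC frame:

```java
// Illustrative sketch: build the 7-byte ADTS header that must be
// prepended to each raw AAC frame.  The profile / sample-rate index /
// channel configuration are assumed to have been parsed already from
// the encoder's extradata (AudioSpecificConfig).
public final class AdtsHeader {
    // profile: 1 = AAC LC (ADTS profile = MPEG-4 object type - 1)
    // srIndex: e.g. 4 = 44100 Hz; channelConfig: e.g. 2 = stereo
    public static byte[] build(int profile, int srIndex, int channelConfig, int rawLength) {
        int frameLength = rawLength + 7;          // ADTS frame length includes the header
        byte[] h = new byte[7];
        h[0] = (byte) 0xFF;                       // syncword 0xFFF ...
        h[1] = (byte) 0xF1;                       // ... + MPEG-4, layer 0, no CRC
        h[2] = (byte) ((profile << 6) | (srIndex << 2) | ((channelConfig >> 2) & 0x01));
        h[3] = (byte) (((channelConfig & 0x03) << 6) | ((frameLength >> 11) & 0x03));
        h[4] = (byte) ((frameLength >> 3) & 0xFF);
        h[5] = (byte) (((frameLength & 0x07) << 5) | 0x1F); // buffer fullness = 0x7FF (VBR)
        h[6] = (byte) 0xFC;
        return h;
    }
}
```

Note that the frame-length field counts the header itself, so it is the raw payload length plus 7, and the buffer-fullness field is left at 0x7FF, which signals variable bit rate.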

ExtAudioFileWrite to m4a/aac failing on dual-core devices (ipad 2, iphone 4s)

Submitted by 谁说我不能喝 on 2019-12-18 13:34:00
Question: I wrote a loop to encode PCM audio data generated by my app to AAC using Extended Audio File Services. The encoding takes place in a background thread synchronously, not in real time. The encoding works flawlessly on iPad 1 and iPhone 3GS/4 for both iOS 4 and 5. However, on dual-core devices (iPhone 4S, iPad 2) the third call to ExtAudioFileWrite crashes the encoding thread with no stack trace and no error code. Here is the code in question: The data formats AudioStreamBasicDescription

What is the difference between M4A and AAC Audio Files?

Submitted by 别来无恙 on 2019-12-18 12:13:23
Question: Is there a difference between M4A audio files and AAC audio files, or are they exactly the same thing with a different file extension? Answer 1: .M4A files typically contain audio only and are formatted as MPEG-4 Part 14 files (.MP4 container). .AAC is not a container format; it is a raw MPEG-4 Part 3 bitstream with the encoded audio stream. Note that M4A does not have to contain AAC audio; there are other valid options as well. Answer 2: There are raw video and audio streams, this
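A quick way to see the container-versus-raw-bitstream distinction in practice is to sniff the first bytes of a file: raw ADTS AAC starts with the 12-bit syncword 0xFFF, while an MP4/M4A file normally starts with a box whose type "ftyp" sits at byte offset 4. A rough sketch (illustrative only, not a robust probe):

```java
// Illustrative sketch: distinguish a raw ADTS .aac bitstream from an
// MPEG-4 (.m4a/.mp4) container by looking at the first few bytes.
import java.io.FileInputStream;
import java.io.IOException;

public final class AacOrM4a {
    public static String sniff(String path) throws IOException {
        byte[] buf = new byte[8];
        try (FileInputStream in = new FileInputStream(path)) {
            if (in.read(buf) < buf.length) return "file too short";
        }
        if ((buf[0] & 0xFF) == 0xFF && (buf[1] & 0xF0) == 0xF0)
            return "raw AAC (ADTS syncword 0xFFF)";     // .aac-style bitstream
        if (buf[4] == 'f' && buf[5] == 't' && buf[6] == 'y' && buf[7] == 'p')
            return "MPEG-4 container (ftyp box)";        // .mp4/.m4a-style file
        return "unknown";
    }
}
```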

ffmpeg wrong audio file after conversion in AAC

Submitted by 喜夏-厌秋 on 2019-12-17 16:52:53
Question: I have a problem with encoding to AAC with FFmpeg. I have an mp4 file with AAC audio and tried to copy the audio out with ffmpeg. In the source mp4 file, the first audio noise appears at 0.30 seconds. After conversion using ffmpeg -i inputfile.mp4 -c:a copy outputfile.aac , the resulting file is wrong: the first audio noise appears at 0.32 seconds, and the duration of the file is not the same either. When I force the encoder to libfaac, it works but the file is too big. So why doesn't it work when the

Decoding AAC using MediaCodec API on Android

Submitted by 非 Y 不嫁゛ on 2019-12-17 15:50:12
Question: I'm trying to use the MediaCodec API on Android to decode an AAC stream. (It's raw AAC.) I tried using MediaFormat.createAudioFormat() to create the format object to pass to MediaCodec.configure(), but I kept getting errors when using AAC (audio/mp4a-latm). (It works with MP3 (audio/mpeg) though...) Finally I created a MediaExtractor for an AAC file and looked at the format object it was producing. I saw that it included the key "csd-0" for a ByteBuffer composed of two bytes both with
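For reference, a hedged sketch of the configuration step the question is circling: supplying the two-byte AudioSpecificConfig to the decoder as "csd-0". The 0x12 0x10 value below encodes AAC-LC, 44.1 kHz, 2 channels and is illustrative only; a real stream's bytes should come from a MediaExtractor or be derived from the stream's actual parameters.

```java
// Illustrative sketch: configuring a MediaCodec AAC decoder for a raw
// AAC-LC stream by supplying the two-byte AudioSpecificConfig as "csd-0".
// The 0x12 0x10 bytes assume AAC-LC, 44.1 kHz, 2 channels.
import java.io.IOException;
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaFormat;

public final class AacDecoderFactory {
    public static MediaCodec create() throws IOException {
        MediaCodec decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 2);

        // AudioSpecificConfig: 5 bits object type (2 = LC), 4 bits
        // sample-rate index (4 = 44100), 4 bits channel config (2),
        // 3 trailing zero bits  ->  0x12 0x10
        byte[] asc = { (byte) 0x12, (byte) 0x10 };
        format.setByteBuffer("csd-0", ByteBuffer.wrap(asc));

        decoder.configure(format, null, null, 0);
        decoder.start();
        return decoder;
    }
}
```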

PCM -> AAC (Encoder) -> PCM(Decoder) in real-time with correct optimization

Submitted by 耗尽温柔 on 2019-12-17 08:11:13
Question: I'm trying to implement AudioRecord (MIC) -> PCM -> AAC encoder -> AAC -> PCM decoder -> AudioTrack (SPEAKER) with MediaCodec on Android 4.1+ (API 16). Firstly, I successfully (though I'm not sure it is correctly optimized) implemented the PCM -> AAC encoder with MediaCodec, as intended, as below: private boolean setEncoder(int rate) { encoder = MediaCodec.createEncoderByType("audio/mp4a-latm"); MediaFormat format = new MediaFormat(); format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm"); format.setInteger
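For context, a minimal sketch of the PCM -> AAC encoder setup this excerpt describes; bit rate, channel count and profile below are placeholder values, not taken from the original post, and the original continues beyond the truncated snippet.

```java
// Illustrative sketch of the encoder setup the question describes:
// PCM in, AAC-LC out via MediaCodec.  Bit rate and channel count are
// placeholder values.
import java.io.IOException;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

public final class AacEncoderFactory {
    public static MediaCodec create(int sampleRate) throws IOException {
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", sampleRate, 1);
        format.setInteger(MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024);

        MediaCodec encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }
}
```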

RTP AAC Packet Depacketizer

Submitted by ℡╲_俬逩灬. on 2019-12-14 02:28:14
Question: I asked earlier about H264 at RTP H.264 Packet Depacketizer. My question now is about the audio packets. I noticed via the RTP packets that audio frames like AAC, G.711, G.726 and others all have the marker bit set. I think the frames are independent. Am I right? My question is: audio frames are small, but I know I can have more than one frame per RTP packet. Regardless of how many frames there are, are they complete, or can a frame be fragmented across RTP packets? Answer 1: The difference between audio and
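For what the answer is getting at: with the common RFC 3640 (mpeg4-generic, AAC-hbr) packetization, one RTP payload can carry several complete access units, each described by a 16-bit AU header (13-bit size, 3-bit index), or a single fragmented one, in which case the marker bit is set only on the final fragment. A rough sketch of splitting a non-fragmented payload:

```java
// Illustrative sketch: splitting an RTP payload that carries AAC per
// RFC 3640 (mpeg4-generic, AAC-hbr mode: 13-bit AU size + 3-bit index).
// Fragmented access units are not handled; for those the payload holds
// a partial AU and the marker bit is set only on the last fragment.
import java.util.ArrayList;
import java.util.List;

public final class AacDepacketizer {
    public static List<byte[]> split(byte[] payload) {
        int auHeadersLengthBits = ((payload[0] & 0xFF) << 8) | (payload[1] & 0xFF);
        int auCount = auHeadersLengthBits / 16;          // 16 bits per AU header in AAC-hbr
        int dataOffset = 2 + (auHeadersLengthBits + 7) / 8;

        List<byte[]> frames = new ArrayList<>();
        for (int i = 0; i < auCount; i++) {
            int hdr = ((payload[2 + 2 * i] & 0xFF) << 8) | (payload[3 + 2 * i] & 0xFF);
            int auSize = hdr >> 3;                       // upper 13 bits = AU size in bytes
            byte[] frame = new byte[auSize];
            System.arraycopy(payload, dataOffset, frame, 0, auSize);
            frames.add(frame);
            dataOffset += auSize;
        }
        return frames;
    }
}
```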

Streaming AAC audio in an Android application

Submitted by 允我心安 on 2019-12-13 18:23:15
Question: I am trying to program a simple Android app that will stream an internet radio station (I have the URL), but the stream is AAC audio. I am aware of COREaac, but there isn't a lot of documentation about it. Is there a separate decoding library I need to get this app to work? Any suggestions would be helpful, or if anyone has had a similar issue and resolved it. Thanks. Answer 1: All you need is here: AACPlayer - android. Source: https://stackoverflow.com/questions/9009204/streaming-aac-audio-in-an

FFmpeg check channels of a 7.1 audio for silence

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-13 12:23:20
Question: This is a follow-up to my previous question asked here, where I needed to look for silence within a specific audio track. Here is the life-saver ffmpeg solution that helps get some metadata: ffmpeg -i file -map 0:a:1 -af astats -f null - But I have another type of input .mp4 file that has one single track of 8 (i.e. 7.1) audio channels. Apparently these files are transcoded from an original file (somehow the 4 stereo tracks are squished into these files). Now similar to

How to play live AAC stream on Android with html5 audio element

Submitted by 别来无恙 on 2019-12-13 00:48:38
Question: I am trying to embed an html5 audio tag in a page to allow playing a live AAC+ stream coming from an Icecast server. According to the media formats developer's guide, Android supports playback of several AAC flavors, either inside an MPEG-4 container or in ADTS. I have successfully played AAC-encoded audio files in an MPEG-4 container, thus: <audio controls="controls"> <source src="http://www.example.com/audio/program1.mp4" type="audio/mp4"/> </audio> However, I have not been able to play