audiounit

Apple's Voice Processing Audio Unit (kAudioUnitSubType_VoiceProcessingIO) broken on iOS 5.1

倖福魔咒の submitted on 2019-11-29 05:02:11
I'm writing a VoIP app for iPad (currently targeting the iPad 2 and 3). I originally wrote the audio code using the Audio Unit API, with a kAudioUnitSubType_RemoteIO unit. This worked well, but unsurprisingly echo was a problem. I tried to use the built-in echo suppression by switching to a kAudioUnitSubType_VoiceProcessingIO unit. This works really well on iOS 6 (iPad 3), but the same code on iOS 5.1 (iPad 2) produces white noise on the microphone input. The documentation just mentions that it should be available in iOS 3.0 and later. The iOS version seems to be the important difference here. I tried

How to make a simple EQ AudioUnit (bass, mid, treble) with iOS?

我只是一个虾纸丫 submitted on 2019-11-28 17:40:39
Does anyone know how to make a simple EQ audio unit (3 bands: low, mid, high) on iOS? I know how to add an iPod EQ audio unit to my AUGraph, but it only gives you access to presets, and I need proper control of the EQ. I've looked around for tutorials or explanations but had no luck. Thanks. André The iPhone doesn't exactly support custom AudioUnits. Or, more precisely, it doesn't allow you to register an AudioUnit's identifier so you could load it in an AUGraph. You can, however, register a render callback, get raw PCM data, and process it accordingly. This is how I've implemented effect

AudioUnits causing universal skipping after returning from Springboard

巧了我就是萌 submitted on 2019-11-28 10:59:15
Question: I have a problem in my applications where I am using AudioUnits. All of the application's audio (including audio not played through AudioUnits) will start skipping after exiting to Springboard and returning to the application. I broke the problem out into a separate test app. Here are the steps to reproduce it: 1. Start an audio file playing using an AVAudioPlayer. 2. Create, delete, then again create an AudioUnit. 3. Exit to Springboard. 4. Return to the app. The audio from the AVAudioPlayer will start

iPhone: AudioBufferList init and release

久未见 submitted on 2019-11-28 07:44:10
What are the correct ways of initializing (allocating memory) and releasing (freeing) an AudioBufferList with 3 AudioBuffers? (I'm aware that there may be more than one way of doing this.) I'd like to use those 3 buffers to read sequential parts of an audio file into them and play them back using Audio Units. Here is how I do it: AudioBufferList *AllocateABL(UInt32 channelsPerFrame, UInt32 bytesPerFrame, bool interleaved, UInt32 capacityFrames) { AudioBufferList *bufferList = NULL; UInt32 numBuffers = interleaved ? 1 : channelsPerFrame; UInt32 channelsPerBuffer = interleaved ?

Can I use AVAudioEngine to read from a file, process with an audio unit and write to a file, faster than real-time?

倖福魔咒の submitted on 2019-11-28 06:34:08
I am working on an iOS app that uses AVAudioEngine for various things, including recording audio to a file, applying effects to that audio using audio units, and playing back the audio with the effect applied. I use a tap to also write the output to a file. When this is done it writes to the file in real time as the audio is playing back. Is it possible to set up an AVAudioEngine graph that reads from a file, processes the sound with an audio unit, and outputs to a file, but faster than real time (i.e., as fast as the hardware can process it)? The use case for this would be to output a few

How to record sound produced by mixer unit output (iOS Core Audio & Audio Graph)

て烟熏妆下的殇ゞ submitted on 2019-11-28 04:12:00
I'm trying to record the sound produced by a mixer unit's output. For the moment, my code is based on Apple's MixerHost iOS demo app: a mixer node is connected to a remote IO node in the audio graph. I try to set an input callback on the remote IO node's input, fed by the mixer output. I am doing something wrong but cannot find the error. Here is the code below, which runs just after the Multichannel Mixer unit setup: UInt32 flag = 1; // Enable IO for playback result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, // Output bus &flag, sizeof(flag)); if

Write Audio To Disk From IO Unit

半世苍凉 submitted on 2019-11-27 20:16:50
Rewriting this question to be a little more succinct. My problem is that I can't successfully write an audio file to disk from a remote IO unit. The steps I took were to open an mp3 file and extract its audio to buffers. I set up an ASBD to use with my graph, based on the properties of the graph. I set up and run my graph, looping the extracted audio, and sound successfully comes out the speaker! What I'm having difficulty with is taking the audio samples from the remote IO callback and writing them to an audio file on disk, which I am using ExtAudioFileWriteAsync for. The audio file does get
