audiounit

Receiving kAUGraphErr_CannotDoInCurrentContext when calling AUGraphStart for playback

∥☆過路亽.° submitted on 2019-12-03 04:07:58
I'm working with the AUGraph and Audio Unit APIs to play back and record audio in my iOS app. Now I have a rare issue where an AUGraph is unable to start, with the following error: result = kAUGraphErr_CannotDoInCurrentContext (-10863) The error occurs unpredictably when we try to call AUGraphStart on the graph set up for audio playback: (BOOL)startRendering { if (playing) { return YES; } playing = YES; if (NO == [self setupAudioForGraph:&au_play_graph playout:YES]) { print_error("Failed to create play AUGraph",0); playing = NO; return NO; } //result = kAUGraphErr_CannotDoInCurrentContext (-10863)
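
Error -10863 typically means the graph cannot be (re)started at that moment, for example because the audio session is not active or the engine is mid-reconfiguration. A minimal sketch of one common workaround, activating the session and retrying AUGraphStart a few times (StartGraphWithRetry and the 50 ms delay are illustrative choices, not part of the original code):

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
#include <unistd.h>

// Sketch only: make sure the session is active, then retry AUGraphStart a few
// times instead of giving up on the first kAUGraphErr_CannotDoInCurrentContext.
static OSStatus StartGraphWithRetry(AUGraph graph)
{
    NSError *sessionError = nil;
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                           error:&sessionError];
    [[AVAudioSession sharedInstance] setActive:YES error:&sessionError];

    OSStatus status = noErr;
    for (int attempt = 0; attempt < 5; attempt++) {
        status = AUGraphStart(graph);
        if (status != kAUGraphErr_CannotDoInCurrentContext) {
            break;                 // started, or failed with a different error worth surfacing
        }
        usleep(50 * 1000);         // wait 50 ms before trying again
    }
    return status;
}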

AudioUnit tone generator is giving me a chirp at the end of each tone generated

≡放荡痞女 submitted on 2019-12-03 04:01:38
I'm creating an old-school music emulator for the old GWBasic PLAY command. To that end I have a tone generator and a music player. Between each of the notes played I'm getting a chirp sound that's mucking things up. Below are both of my classes: ToneGen.h #import <Foundation/Foundation.h> @interface ToneGen : NSObject @property (nonatomic) id delegate; @property (nonatomic) double frequency; @property (nonatomic) double sampleRate; @property (nonatomic) double theta; - (void)play:(float)ms; - (void)play; - (void)stop; @end ToneGen.m #import <AudioUnit/AudioUnit.h> #import "ToneGen.h" OSStatus
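
The chirp between notes is usually a discontinuity: the sine wave is cut off at a non-zero amplitude, which produces a broadband click. A hedged sketch of a render callback that ramps the amplitude to zero over the last few milliseconds of each note (the ToneState struct and its field names are hypothetical, not taken from the ToneGen class above):

#include <AudioUnit/AudioUnit.h>
#include <math.h>

// Hypothetical state for the sketch: current phase, phase increment,
// and how many frames remain before the note ends.
typedef struct {
    double theta;        // current phase (radians)
    double thetaInc;     // 2 * pi * frequency / sampleRate
    SInt64 framesLeft;   // frames remaining in the note
    SInt64 fadeFrames;   // length of the fade-out tail (e.g. ~5 ms worth of frames)
} ToneState;

static OSStatus RenderTone(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    ToneState *state = (ToneState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // A linear fade-out over the last fadeFrames frames avoids the click/chirp
        // caused by cutting the waveform off at a non-zero amplitude.
        double gain = 1.0;
        if (state->framesLeft <= 0) {
            gain = 0.0;
        } else if (state->framesLeft < state->fadeFrames) {
            gain = (double)state->framesLeft / (double)state->fadeFrames;
        }
        out[i] = (Float32)(sin(state->theta) * gain);
        state->theta += state->thetaInc;
        if (state->theta > 2.0 * M_PI) state->theta -= 2.0 * M_PI;
        if (state->framesLeft > 0) state->framesLeft--;
    }
    return noErr;
}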

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

孤者浪人 submitted on 2019-12-03 03:22:12
I've got an OSX app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio that I get from the built-in input is often clipped, whereas in a program such as Audacity (or even QuickTime) I can turn down the input level and I don't get clipping. Multiplying the sample frames by a fraction, of course, doesn't work, because I get a lower volume, but the samples themselves are still clipped at the time of input. How do I set the input level or gain for that built-in input to avoid the
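
On OSX the input level of a HAL device is exposed as the kAudioDevicePropertyVolumeScalar property on the input scope; whether a device honours it on the master element or only per channel varies by hardware. A rough sketch, assuming the default input device (SetDefaultInputGain is an illustrative helper name, not an existing API):

#include <CoreAudio/CoreAudio.h>

// Sketch: set the input volume scalar (0.0 - 1.0) on the default input device.
static OSStatus SetDefaultInputGain(Float32 gain)
{
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    OSStatus status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                                 0, NULL, &size, &device);
    if (status != noErr) return status;

    addr.mSelector = kAudioDevicePropertyVolumeScalar;
    addr.mScope    = kAudioDevicePropertyScopeInput;

    // Some devices expose the control on the master element, others only per channel.
    UInt32 elements[] = { kAudioObjectPropertyElementMaster, 1, 2 };
    status = kAudioHardwareUnknownPropertyError;
    for (int i = 0; i < 3; i++) {
        addr.mElement = elements[i];
        if (AudioObjectHasProperty(device, &addr) &&
            AudioObjectSetPropertyData(device, &addr, 0, NULL,
                                       sizeof(gain), &gain) == noErr) {
            status = noErr;
        }
    }
    return status;
}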

iOS AudioUnits pass through

≯℡__Kan透↙ submitted on 2019-12-03 01:38:08
I am trying to write an iOS application that will pass the sound received from the microphone to the speaker without any changes. I've read Apple's docs and guides. I chose the first pattern from this guide. But nothing happens: silence. As you can see, I've tried to use the AUAudioGraph (commented out) - same result (do I need it in this simple example at all?). I saw a few examples on the internet where callbacks are used, but I do not want to use any. Is it possible? Any suggestions? Thanks for your attention. The actual code: #import "AudioController.h" #import <AudioToolbox/AudioToolbox.h> #import
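
For reference, the "first pattern" (I/O pass-through) needs three things that are easy to miss: a PlayAndRecord session category, input explicitly enabled on bus 1 of the Remote I/O unit (it is off by default, a common cause of silence), and a connection from the unit's input element to its output element. A minimal callback-free sketch with error checking omitted (MakePassThroughGraph is an illustrative name):

#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

// Sketch of a callback-free microphone -> speaker pass-through using one
// Remote I/O unit inside an AUGraph, following the "I/O pass through" pattern.
static AUGraph MakePassThroughGraph(void)
{
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AUNode ioNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphOpen(graph);

    AudioUnit ioUnit;
    AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);

    // Input (bus 1) is disabled by default on Remote I/O; forgetting this is a
    // common reason the pass-through stays silent.
    UInt32 enable = 1;
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));

    // Feed the unit's input element (bus 1) straight into its output element (bus 0).
    AUGraphConnectNodeInput(graph, ioNode, 1, ioNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}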

iOS Audio Units : When is usage of AUGraph's necessary?

五迷三道 submitted on 2019-12-02 17:46:11
I'm totally new to iOS programming (I'm more an Android guy..) and have to build an application dealing with audio DSP. (I know it's not the easiest way to approach iOS dev ;) ) The app needs to be able to accept input both from: 1- the built-in microphone 2- the iPod library Then filters may be applied to the input sound, and the result is to be output to: 1- the speaker 2- a recorded file My question is the following: Is an AUGraph necessary in order to be able, for example, to apply multiple filters to the input, or can these different effects be applied by processing the samples with different
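
An AUGraph is not strictly required (effects can also be wired together by hand with kAudioUnitProperty_MakeConnection, or the DSP can be done directly in a render callback), but a graph is the most convenient way to chain several built-in effect units. A rough sketch of such a chain, mic -> low-pass filter -> reverb -> speaker, with stream-format setup and error checking omitted (BuildFilterChain is an illustrative name):

#import <AudioToolbox/AudioToolbox.h>

// Sketch: AUGraph chaining two effects between the Remote I/O input (bus 1)
// and its output (bus 0).
static void BuildFilterChain(AUGraph *outGraph)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription io      = { kAudioUnitType_Output, kAudioUnitSubType_RemoteIO,      kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription lowPass = { kAudioUnitType_Effect, kAudioUnitSubType_LowPassFilter, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription reverb  = { kAudioUnitType_Effect, kAudioUnitSubType_Reverb2,       kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode ioNode, lowPassNode, reverbNode;
    AUGraphAddNode(graph, &io,      &ioNode);
    AUGraphAddNode(graph, &lowPass, &lowPassNode);
    AUGraphAddNode(graph, &reverb,  &reverbNode);
    AUGraphOpen(graph);

    // Enable recording on the Remote I/O input bus (disabled by default).
    AudioUnit ioUnit;
    AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);
    UInt32 enable = 1;
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));

    // mic (bus 1) -> low-pass -> reverb -> speaker (bus 0)
    AUGraphConnectNodeInput(graph, ioNode,      1, lowPassNode, 0);
    AUGraphConnectNodeInput(graph, lowPassNode, 0, reverbNode,  0);
    AUGraphConnectNodeInput(graph, reverbNode,  0, ioNode,      0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    *outGraph = graph;
}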

iOS: Audio Units vs OpenAL vs Core Audio

早过忘川 submitted on 2019-12-02 16:13:46
Could someone explain to me how OpenAL fits in with the schema of sound on the iPhone? There seem to be APIs at different levels for handling sound. The higher-level ones are easy enough to understand. But my understanding gets murky towards the bottom. There are Core Audio, Audio Units, and OpenAL. What is the connection between these? Is OpenAL the substratum, upon which rests Core Audio (which contains as one of its lower-level objects Audio Units)? OpenAL doesn't seem to be documented by Xcode, yet I can run code that uses its functions. This is what I have figured out: The substratum is Core

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

不羁的心 submitted on 2019-12-01 00:49:10
I am trying to capture app sound and pass it to AVAssetWriter as input. I am setting a callback for the audio unit to get an AudioBufferList. The problem starts with converting the AudioBufferList to a CMSampleBufferRef. It always returns error -12731, which indicates that a required parameter is missing. Thanks, Karol -(OSStatus) recordingCallbackWithRef:(void*)inRefCon flags:(AudioUnitRenderActionFlags*)flags timeStamp:(const AudioTimeStamp*)timeStamp busNumber:(UInt32)busNumber framesNumber:(UInt32)numberOfFrames data:(AudioBufferList*)data { AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList
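
Error -12731 is kCMSampleBufferError_RequiredParameterMissing; it usually means the CMSampleBufferRef was created without a format description or timing information before CMSampleBufferSetDataBufferFromAudioBufferList was called. A hedged sketch of the usual sequence (CreateSampleBuffer is an illustrative helper; asbd is assumed to match the audio unit's stream format):

#import <CoreMedia/CoreMedia.h>
#import <AudioToolbox/AudioToolbox.h>

// Sketch: wrap an AudioBufferList in a CMSampleBuffer, supplying the format
// description and timing that CMSampleBufferSetDataBufferFromAudioBufferList needs.
static CMSampleBufferRef CreateSampleBuffer(const AudioStreamBasicDescription *asbd,
                                            const AudioTimeStamp *timeStamp,
                                            UInt32 numberOfFrames,
                                            const AudioBufferList *bufferList)
{
    CMAudioFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, asbd,
                                                     0, NULL, 0, NULL, NULL, &format);
    if (status != noErr) return NULL;

    CMSampleTimingInfo timing = {
        .duration = CMTimeMake(1, (int32_t)asbd->mSampleRate),
        .presentationTimeStamp = CMTimeMake((int64_t)timeStamp->mSampleTime,
                                            (int32_t)asbd->mSampleRate),
        .decodeTimeStamp = kCMTimeInvalid
    };

    CMSampleBufferRef sampleBuffer = NULL;
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL,
                                  format, numberOfFrames, 1, &timing, 0, NULL,
                                  &sampleBuffer);
    CFRelease(format);
    if (status != noErr) return NULL;

    // Now the buffer has a format description and timing, so this call has
    // everything it requires.
    status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0, bufferList);
    if (status != noErr) { CFRelease(sampleBuffer); return NULL; }
    return sampleBuffer;
}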

OSX: CoreAudio API for setting IO Buffer length?

你说的曾经没有我的故事 submitted on 2019-12-01 00:08:20
This is a follow-up to a previous question: OSX CoreAudio: Getting inNumberFrames in advance - on initialization? I am trying to figure out what the AudioUnit API would be for setting inNumberFrames or the preferred IO buffer duration of an input callback for a single HAL audio component instance on OSX (not a plug-in!). While I understand there is comprehensive documentation on how this can be achieved on iOS by means of the AVAudioSession API, I can neither figure out nor find documentation on setting these values on OSX, with any API. The web is full of expert, yet conflicting
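
On OSX the closest counterpart to iOS's preferred IO buffer duration appears to be the device-level kAudioDevicePropertyBufferFrameSize property, which the HAL clamps to the range reported by kAudioDevicePropertyBufferFrameSizeRange. A minimal sketch (SetDeviceBufferFrameSize is an illustrative name):

#include <CoreAudio/CoreAudio.h>

// Sketch: ask the HAL device to deliver `frames` frames per IO cycle, which in
// turn determines the inNumberFrames seen by the AUHAL input callback.
static OSStatus SetDeviceBufferFrameSize(AudioDeviceID device, UInt32 frames)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    return AudioObjectSetPropertyData(device, &addr, 0, NULL,
                                      sizeof(frames), &frames);
}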

What's the reason of using Circular Buffer in iOS Audio Calling APP?

﹥>﹥吖頭↗ submitted on 2019-11-30 22:52:31
My question is pretty much self-explanatory. Sorry if it seems too dumb. I am writing an iOS VoIP dialer and have checked some open-source code (iOS audio calling apps). Almost all of them use a circular buffer for storing recorded and received PCM audio data. So I am wondering why we need to use a circular buffer in this case. What's the exact reason for using such an audio buffer? Thanks in advance. Good question. There is another good reason for using a circular buffer. In iOS, if you use callbacks (Audio Unit) for recording and playing audio (in fact, you need to use them if you want to create a
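
The short answer: the recording callback (producer) and the playback or network-send path (consumer) run on different threads at their own pace and must never block, so a lock-free ring buffer lets one side write while the other reads. A minimal single-producer/single-consumer sketch in C (the capacity and int16_t sample type are illustrative):

#include <stdatomic.h>
#include <stdint.h>

// Minimal single-producer / single-consumer ring buffer for PCM samples.
// The recording callback writes, the playback callback reads; neither blocks.
#define RING_CAPACITY 8192   // must be a power of two (samples)

typedef struct {
    int16_t data[RING_CAPACITY];
    _Atomic uint32_t head;   // next write position (touched only by the producer)
    _Atomic uint32_t tail;   // next read position (touched only by the consumer)
} RingBuffer;

static uint32_t ring_write(RingBuffer *rb, const int16_t *src, uint32_t count)
{
    uint32_t head = atomic_load(&rb->head);
    uint32_t tail = atomic_load(&rb->tail);
    uint32_t space = RING_CAPACITY - (head - tail);
    if (count > space) count = space;            // drop what doesn't fit
    for (uint32_t i = 0; i < count; i++)
        rb->data[(head + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store(&rb->head, head + count);
    return count;
}

static uint32_t ring_read(RingBuffer *rb, int16_t *dst, uint32_t count)
{
    uint32_t head = atomic_load(&rb->head);
    uint32_t tail = atomic_load(&rb->tail);
    uint32_t avail = head - tail;
    if (count > avail) count = avail;            // underrun: caller fills the rest with silence
    for (uint32_t i = 0; i < count; i++)
        dst[i] = rb->data[(tail + i) & (RING_CAPACITY - 1)];
    atomic_store(&rb->tail, tail + count);
    return count;
}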