core-audio

Use rear microphone of iPhone 5

帅比萌擦擦* submitted on 2019-11-30 19:05:19
I have used the following code to stream audio I/O from the microphone. What I want to do is select the rear microphone for recording. I have read that setting kAudioSessionProperty_Mode to kAudioSessionMode_VideoRecording can do the job, but I am not sure how to use this with my code. Can anyone help me set this parameter successfully? I have these lines for setting the property: status = AudioUnitSetProperty(audioUnit, kAudioSessionProperty_Mode, kAudioSessionMode_VideoRecording, kOutputBus, &audioFormat, sizeof(audioFormat)); checkStatus(status); but it's not working…
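kAudioSessionProperty_Mode is an audio session property, not an AudioUnit property, so it cannot be set with AudioUnitSetProperty. A minimal sketch of the session-level call, using the (since deprecated) C audio session API; checkStatus is reused from the question's code:

    #import <AudioToolbox/AudioToolbox.h>

    // Ask the session for video-recording behaviour; this mode is documented
    // to tune input for camera recording, which on multi-microphone devices
    // affects which microphone is used.
    UInt32 sessionMode = kAudioSessionMode_VideoRecording;
    OSStatus status = AudioSessionSetProperty(kAudioSessionProperty_Mode,
                                              sizeof(sessionMode),
                                              &sessionMode);
    checkStatus(status);

On iOS 7 and later, microphone selection is exposed more directly through AVAudioSession input data sources (for example AVAudioSessionOrientationBack), which avoids the deprecated C API.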

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

烂漫一生 submitted on 2019-11-30 18:57:05
Question: I am trying to capture app sound and pass it to AVAssetWriter as input. I am setting a callback on the audio unit to get an AudioBufferList. The problem starts with converting the AudioBufferList to a CMSampleBufferRef; it always returns error -12731, which indicates that a required parameter is missing. Thanks, Karol. -(OSStatus) recordingCallbackWithRef:(void*)inRefCon flags:(AudioUnitRenderActionFlags*)flags timeStamp:(const AudioTimeStamp*)timeStamp busNumber:(UInt32)busNumber framesNumber:(UInt32…
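Error -12731 is kCMSampleBufferError_RequiredParameterMissing. A common cause is attaching the AudioBufferList to a sample buffer that was created without a matching format description. A sketch of the conversion inside such a callback (asbd and bufferList are assumed to exist; framesNumber and timeStamp come from the callback signature above):

    #import <CoreMedia/CoreMedia.h>

    // Build a format description that matches the AudioBufferList contents.
    CMAudioFormatDescriptionRef format = NULL;
    CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd,
                                   0, NULL, 0, NULL, NULL, &format);

    CMSampleTimingInfo timing = {
        CMTimeMake(1, (int32_t)asbd.mSampleRate),                 // duration of one frame
        CMTimeMake((int64_t)timeStamp->mSampleTime,
                   (int32_t)asbd.mSampleRate),                    // presentation time
        kCMTimeInvalid
    };

    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL,
                         format, framesNumber, 1, &timing, 0, NULL, &sampleBuffer);

    // Attach the data only after the buffer exists with a format description
    // and a sample count matching the list; omitting either tends to
    // produce -12731.
    OSStatus err = CMSampleBufferSetDataBufferFromAudioBufferList(
        sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList);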

iOS AUSampler audiounit - file path issue with EXS audio files?

被刻印的时光 ゝ submitted on 2019-11-30 18:35:27
Question: Following the Apple docs here, I have been able to successfully load a GarageBand EXS sampler instrument into AUSampler in my iOS app by recreating, for example, the following path within my app directory: /Sampler Files/Funk Horn Section/nameofaudio.aif. iOS looks for the audio file in the following directory: file:///Library/Application%20Support/GarageBand/Instrument%20Library/Sampler/Sampler%20Files/Funk%20Horn%20Section/. However, this doesn't work when I create my own EXS file. How does it…
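For reference, a minimal sketch of the EXS load itself, following the pattern in the Apple docs referenced above (samplerUnit and exsURL are assumed to exist; the audio-file path resolution described in the question happens inside this call):

    #import <AudioToolbox/AudioToolbox.h>

    AUSamplerInstrumentData instrumentData = {0};
    instrumentData.fileURL        = (__bridge CFURLRef)exsURL; // URL of the .exs file in the app bundle
    instrumentData.instrumentType = kInstrumentType_EXS24;
    instrumentData.bankMSB        = kAUSampler_DefaultMelodicBankMSB;
    instrumentData.bankLSB        = kAUSampler_DefaultBankLSB;
    instrumentData.presetID       = 0;

    OSStatus result = AudioUnitSetProperty(samplerUnit,
                                           kAUSamplerProperty_LoadInstrument,
                                           kAudioUnitScope_Global, 0,
                                           &instrumentData, sizeof(instrumentData));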

OSX: CoreAudio API for setting IO Buffer length?

烂漫一生 submitted on 2019-11-30 18:32:27
Question: This is a follow-up to a previous question: OSX CoreAudio: Getting inNumberFrames in advance - on initialization? I am trying to figure out which AudioUnit API, if any, can set inNumberFrames (or the preferred I/O buffer duration) of an input callback for a single HAL audio component instance on OS X (not a plug-in!). While I understand there is comprehensive documentation on how this can be achieved on iOS by means of the AVAudioSession API, I can neither figure out nor find…
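One approach is to request a buffer size from the device itself with kAudioDevicePropertyBufferFrameSize. A sketch, assuming deviceID is the AudioDeviceID bound to the AUHAL instance:

    #import <CoreAudio/CoreAudio.h>

    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 bufferFrames = 512; // requested inNumberFrames per render callback
    OSStatus err = AudioObjectSetPropertyData(deviceID, &addr, 0, NULL,
                                              sizeof(bufferFrames), &bufferFrames);

The requested size is clamped to the device's kAudioDevicePropertyBufferFrameSizeRange, so the inNumberFrames actually delivered should still be verified in the callback.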

How do I get my sound to play when a remote notification is received?

与世无争的帅哥 submitted on 2019-11-30 17:29:31
Question: I'm trying to automatically play a sound file (that is not part of my app bundle and is not a notification sound) upon receiving a remote notification. I want this to happen whether the app is in the foreground or background when the notification is received. I'm using The Amazing Audio Engine as a wrapper around the Core Audio libraries. In my app delegate's didReceiveRemoteNotification I create an audio controller and add an AEAudioFilePlayer to it like so: NSURL *file = [NSURL fileURLWithPath…
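For reference, a sketch of that setup. The class and method names follow TheAmazingAudioEngine 1.x as I recall them and may differ between versions; path is a placeholder for the downloaded file:

    NSError *error = nil;
    AEAudioController *audioController =
        [[AEAudioController alloc] initWithAudioDescription:
            [AEAudioController nonInterleaved16BitStereoAudioDescription]];
    [audioController start:&error];

    NSURL *file = [NSURL fileURLWithPath:path];
    AEAudioFilePlayer *player = [AEAudioFilePlayer audioFilePlayerWithURL:file
                                                          audioController:audioController
                                                                    error:&error];
    [audioController addChannels:@[ player ]];

Note that starting playback while the app is in the background also requires the audio background mode to be enabled in the app's capabilities.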

reading samples with AVAssetReader and timeRange in real time

狂风中的少年 submitted on 2019-11-30 16:15:56
Question: Previously I read audio samples from a complete audio file using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. Right now I would like to do the same using time ranges (i.e. I specify a range in time, read a small chunk of audio for that time, then go back and read again). The reason I want to use time ranges is that I want to control the size of each read (to fit in a packet with a max size). For some reason, there is always a bump between each read. In my code you'll notice that I start the AVAssetReader and end it every time I set a time range, and that's because I cannot…
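For context, the per-chunk pattern looks roughly like this (asset, startTime, and chunkDuration are assumed). An AVAssetReader cannot be rewound or reused, and timeRange must be set before -startReading, which is why a new reader is needed for every range:

    #import <AVFoundation/AVFoundation.h>

    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    reader.timeRange = CMTimeRangeMake(startTime, chunkDuration);

    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:nil]; // nil = native format
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sample;
    while ((sample = [output copyNextSampleBuffer])) {
        // process the chunk, then release it
        CFRelease(sample);
    }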

iOS - Playback of recorded audio fails with OSStatus error -43 (file not found)

泄露秘密 submitted on 2019-11-30 15:14:40
I set up an AVAudioRecorder instance the following way when my view loads:

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    audioSession.delegate = self;
    [audioSession setActive:YES error:nil];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];

    NSString *tempDir = NSTemporaryDirectory();
    NSString *soundFilePath = [tempDir stringByAppendingPathComponent:@"sound.m4a"];
    NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
    NSLog(@"%@", soundFileURL);

    NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt…
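Error -43 is fnfErr ("file not found"). Two checks worth making before playback, sketched here under the assumption that soundFilePath and soundFileURL come from the snippet above: confirm the file really exists at that path (NSTemporaryDirectory and the sandbox container can change between launches), and switch the session out of the record-only category, which can also make playback fail:

    NSError *error = nil;
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                           error:&error];

    if ([[NSFileManager defaultManager] fileExistsAtPath:soundFilePath]) {
        AVAudioPlayer *player =
            [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&error];
        [player play];
    }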

IIR coefficients for peaking EQ, how to pass them to vDSP_deq22?

牧云@^-^@ submitted on 2019-11-30 14:15:56
I have these 6 coefficients for a peaking EQ:

    b0 = 1 + (α ⋅ A)
    b1 = −2 ⋅ ωC
    b2 = 1 − (α ⋅ A)
    a0 = 1 + (α / A)
    a1 = −2 ⋅ ωC
    a2 = 1 − (α / A)

with these intermediate variables:

    ωc = 2 ⋅ π ⋅ fc / fs
    ωS = sin(ωc)
    ωC = cos(ωc)
    A  = sqrt(10^(G/20))
    α  = ωS / (2 ⋅ Q)

The documentation of vDSP_deq22() states that "5 single-precision inputs, filter coefficients" should be passed, but I have 6 coefficients! Also, in what order do I pass them to vDSP_deq22()? Update (17/05): I recommend everyone use the DSP class I released on GitHub: https://github.com/bartolsthoorn/NVDSP It'll probably save you quite some…
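The apparent mismatch resolves by normalization: a biquad has six coefficients, but dividing the whole difference equation by a0 leaves five. A sketch of the normalized call, assuming b0…a2 are computed as above and input/output are float arrays of length n + 2 (vDSP_deq22 treats the first two elements of each as filter history):

    #include <Accelerate/Accelerate.h>

    // vDSP_deq22 expects { b0/a0, b1/a0, b2/a0, a1/a0, a2/a0 }, in that order.
    float coefficients[5] = { b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0 };

    // input and output must each hold n + 2 floats; elements 0 and 1 carry
    // the previous samples (state) across successive calls.
    vDSP_deq22(input, 1, coefficients, output, 1, n);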

How can I specify the format of AVAudioEngine Mic-Input?

橙三吉。 submitted on 2019-11-30 13:10:43
I'd like to record some audio using AVAudioEngine and the user's microphone. I already have a working sample, but I just can't figure out how to specify the format of the output that I want... My requirement is that I need the AVAudioPCMBuffer as I speak, which it currently delivers... Would I need to add a separate node that does some transcoding? I can't find much documentation or many samples on that problem... and I am also a noob when it comes to audio stuff. I know that I want NSData containing PCM 16-bit with a max sample rate of 16000 (8000 would be better). Here's my working sample: private…
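One possible approach (a sketch, not the questioner's code): keep the tap at the input node's native format and downconvert each buffer with AVAudioConverter (iOS 9+) to 16 kHz, 16-bit mono. engine is an assumed AVAudioEngine instance:

    #import <AVFoundation/AVFoundation.h>

    AVAudioInputNode *input = engine.inputNode;
    AVAudioFormat *inputFormat = [input outputFormatForBus:0];
    AVAudioFormat *targetFormat =
        [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
                                         sampleRate:16000
                                           channels:1
                                        interleaved:YES];
    AVAudioConverter *converter =
        [[AVAudioConverter alloc] initFromFormat:inputFormat toFormat:targetFormat];

    [input installTapOnBus:0 bufferSize:4096 format:inputFormat
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        // Size the output for one input buffer's worth of resampled frames.
        AVAudioFrameCount capacity =
            (AVAudioFrameCount)(buffer.frameLength * targetFormat.sampleRate
                                / inputFormat.sampleRate) + 1;
        AVAudioPCMBuffer *converted =
            [[AVAudioPCMBuffer alloc] initWithPCMFormat:targetFormat
                                          frameCapacity:capacity];
        NSError *error = nil;
        __block BOOL consumed = NO;
        [converter convertToBuffer:converted error:&error
                withInputFromBlock:^AVAudioBuffer *(AVAudioPacketCount inNumberOfPackets,
                                                    AVAudioConverterInputStatus *outStatus) {
            // Hand the tapped buffer over exactly once per conversion.
            if (consumed) {
                *outStatus = AVAudioConverterInputStatus_NoDataNow;
                return nil;
            }
            consumed = YES;
            *outStatus = AVAudioConverterInputStatus_HaveData;
            return buffer;
        }];
        // converted.int16ChannelData now holds 16 kHz PCM-16 samples;
        // wrap them in NSData as needed.
    }];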