core-audio

Intercept global audio output in OS X?

◇◆丶佛笑我妖孽 submitted on 2019-12-07 10:41:22
Question: Has anyone come across a way to intercept (and modify) audio in OS X before it reaches the speakers? I realize I can build a driver and change the audio settings to output there, but what I would like to do is use the existing audio output and manipulate the stream before it reaches the chosen device, without the driver-redirect trick. I'd also like to do the inverse and hook the microphone stream before it hits the rest of the pipeline. Is this even possible?

Answer 1: There are two kinds of …
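The answer above is cut off, but any interception approach on macOS first needs a handle on the output device it targets. A minimal Swift sketch (not the answer's own code) of querying the current default output device through the HAL property API:

import CoreAudio

// Ask the system object which device is the current default output.
var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultOutputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)
var deviceID = AudioObjectID(kAudioObjectUnknown)
var size = UInt32(MemoryLayout<AudioObjectID>.size)
let status = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                        &address, 0, nil, &size, &deviceID)
if status == noErr {
    print("Default output device ID: \(deviceID)")
}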

Audio Queue: AudioQueueStart returns -50

戏子无情 submitted on 2019-12-07 09:48:45
Question: I'm trying to write a microphone power meter module in a GLES app (Unity3d). It works fine in a UIKit application, but when I integrate it into my Unity3d project, the AudioQueue cannot start properly. The result code of calling AudioQueueStart is always -50, but what does -50 mean? I can't find a reference in the iOS Developer Library. I have searched about this problem and know someone had the same problem in a cocos2d application, which may be relevant. Here is my code for starting the audio queue:
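For reference, -50 is Core Audio's generic parameter error (kAudio_ParamError, the old paramErr), which for AudioQueueStart usually means the queue was never created successfully, most often because of an inconsistent AudioStreamBasicDescription. A minimal Swift sketch (not the asker's code, which is cut off above) of creating and starting an input queue with the status checked at each step:

import AudioToolbox

func startInputQueue() {
    // 16-bit mono PCM at 44.1 kHz; the byte/frame fields must agree or -50 comes back.
    var format = AudioStreamBasicDescription(
        mSampleRate: 44100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2,
        mFramesPerPacket: 1,
        mBytesPerFrame: 2,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 16,
        mReserved: 0)

    // C-style callback: metering/recording work would go here.
    let callback: AudioQueueInputCallback = { _, _, _, _, _, _ in }

    var queue: AudioQueueRef?
    var status = AudioQueueNewInput(&format, callback, nil, nil, nil, 0, &queue)
    guard status == noErr, let q = queue else {
        print("AudioQueueNewInput failed: \(status)")   // a bad ASBD surfaces here, not at start
        return
    }
    status = AudioQueueStart(q, nil)
    print("AudioQueueStart: \(status)")                 // -50 = kAudio_ParamError
}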

How can I programmatically create a multi-output device in OS X?

佐手、 submitted on 2019-12-07 08:00:26
Question: How can I programmatically create a Multi-Output Device in Mac OS X? The Audio MIDI Setup application provides a GUI for creating one, but I would like to be able to create one in code. I've found some resources already for creating aggregate devices, but multi-output devices function differently and I can't find anything on creating them. Here's what I've got so far:
- How to combine multiple audio interfaces by creating an aggregate device
- Using Aggregate Devices
- Creating Core Audio …
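In practice a multi-output device is built through the same AudioHardwareCreateAggregateDevice call used for aggregate devices, with the "stacked" key set to 1. A minimal Swift sketch, assuming placeholder sub-device UIDs (look up the real UIDs of the devices you want to combine):

import CoreAudio

// The sub-device UIDs below are placeholders; query kAudioDevicePropertyDeviceUID
// on the real output devices you want to combine.
let description: [String: Any] = [
    kAudioAggregateDeviceNameKey: "My Multi-Output Device",
    kAudioAggregateDeviceUIDKey: "com.example.multi-output",
    kAudioAggregateDeviceIsStackedKey: 1,   // 1 = multi-output, 0 = ordinary aggregate
    kAudioAggregateDeviceSubDeviceListKey: [
        [kAudioSubDeviceUIDKey: "BuiltInSpeakerDevice"],
        [kAudioSubDeviceUIDKey: "AppleUSBAudioEngine:Example:1234"]
    ]
]

var aggregateID = AudioObjectID(0)
let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
// Tear it down later with AudioHardwareDestroyAggregateDevice(aggregateID).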

Write an array of floats to a WAV audio file in Swift

大憨熊 submitted on 2019-12-07 04:37:35
Question: I have this flow now: I record audio with AudioEngine, send it to an audio processing library, and get an audio buffer back; then I want to write it to a wav file, but I'm totally confused about how to do that in Swift. I've tried this snippet from another Stack Overflow answer (load a pcm into a AVAudioPCMBuffer), but it writes an empty, corrupted file:

// get data from library
var len: CLong = 0
let res: UnsafePointer<Double> = getData(CLong(), &len)
let bufferPointer: …
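A minimal sketch of the usual AVFoundation route (the sample rate, channel count, and URL here are assumptions, with the samples taken to be mono Float32): copy the floats into an AVAudioPCMBuffer and hand it to an AVAudioFile opened for writing with linear-PCM settings.

import AVFoundation

func writeWav(_ samples: [Float], to url: URL) throws {
    let sampleRate = 44100.0
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!

    let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                  frameCapacity: AVAudioFrameCount(samples.count))!
    buffer.frameLength = buffer.frameCapacity
    for (i, sample) in samples.enumerated() {
        buffer.floatChannelData![0][i] = sample
    }

    // The container is inferred from the .wav extension; these settings describe the data inside it.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: sampleRate,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 32,
        AVLinearPCMIsFloatKey: true
    ]
    let file = try AVAudioFile(forWriting: url, settings: settings)
    try file.write(from: buffer)
}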

Best practice for volume control on iOS?

◇◆丶佛笑我妖孽 submitted on 2019-12-07 04:27:44
Question: Hardware volume control. I'm trying to understand what the best practice is for apps that are mostly silent but occasionally produce sound. Such apps can take advantage of the side volume control on iOS devices and avoid the need to design in an NSVolume control widget, which I believe is not as convenient as the hardware side volume control. The approach would apply to apps like MapQuest 4 mobile, where you get occasional audio prompts that blend well with other music players (using audio ducking) …
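A minimal sketch of the session setup that pattern implies (the category and options here are my assumptions, not from an answer): activate a ducking audio session just before a prompt plays, and deactivate it when the prompt finishes so other audio returns to full volume.

import AVFoundation

let session = AVAudioSession.sharedInstance()

// Call before playing an occasional prompt: other apps' audio is ducked, not interrupted.
func beginPrompt() throws {
    try session.setCategory(.playback, mode: .default, options: [.duckOthers])
    try session.setActive(true)
}

// Call when the prompt has finished: other audio comes back up.
func endPrompt() throws {
    try session.setActive(false, options: [.notifyOthersOnDeactivation])
}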

Bit-shifting audio samples from Float32 to SInt16 results in severe clipping

风流意气都作罢 submitted on 2019-12-07 04:18:53
Question: I'm new to iOS and its C underpinnings, but not to programming in general. My dilemma is this: I'm implementing an echo effect in a complex AudioUnits-based application. The application needs reverb, echo, and compression, among other things. However, the echo only works right when I use a particular AudioStreamBasicDescription format for the audio samples generated in my app. That format, however, doesn't work with the other AudioUnits. While there are other ways to solve this problem …
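A minimal sketch of the standard conversion (not the asker's code): Float32 samples are nominally in the -1.0…1.0 range, so going to SInt16 is a scale and clamp, not a bit shift; reinterpreting or shifting the float's bit pattern is what produces the clipping-like garbage.

// Standard Float32 -> SInt16 conversion: scale into the 16-bit range and clamp.
func convertToInt16(_ input: [Float]) -> [Int16] {
    return input.map { sample in
        let scaled = sample * 32767.0
        return Int16(max(-32768.0, min(32767.0, scaled)))
    }
}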

Save audio stream to MP3 file (iOS)

人走茶凉 submitted on 2019-12-07 04:12:57
Question: I have an AVSpeechSynthesizer which converts text to speech, but I've encountered a problem: I don't know how to save the audio it generates to a music file, which I would quite like to be able to do. So here's my question: how do you save the AVSpeechSynthesizer output? And if this isn't possible, can I use AVFoundation, CoreMedia, or another public API to capture the output to the speakers before it comes out? Thanks!

Answer 1: Unfortunately no, there is no public API available to …

Getting notified when a sound is done playing in OpenAL

蹲街弑〆低调 submitted on 2019-12-07 03:33:47
Question: I'm using OpenAL on iPhone to play multiple audio samples simultaneously. Can I get OpenAL to notify me when a single sample is done playing? I'd like to avoid hardcoding the sample length and setting a timer.

Answer 1: If you have the OpenAL source abstracted into a class, I guess you can simply call performSelector:withObject:afterDelay: when you start the sound:

- (void)play {
    [delegate performSelector:@selector(soundHasFinishedPlaying) withObject:nil afterDelay:self.length];
    …
}

(If you stop the sound manually in the …
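The delayed-selector trick above relies on knowing the sample length up front. A different approach, sketched below in Swift (not shown in the answer itself), is to poll the source state, since OpenAL has no completion callback:

import OpenAL

// `source` is assumed to be a valid id from alGenSources that has already been started.
func isStillPlaying(_ source: ALuint) -> Bool {
    var state: ALint = 0
    alGetSourcei(source, AL_SOURCE_STATE, &state)
    return state == AL_PLAYING
}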

Precise timing with AVMutableComposition

你说的曾经没有我的故事 submitted on 2019-12-07 03:01:18
Question: I'm trying to use AVMutableComposition to play a sequence of sound files at precise times. When the view loads, I create the composition with the intent of playing 4 sounds evenly spaced over 1 second. It shouldn't matter how long or short the sounds are; I just want to fire them at exactly 0, 0.25, 0.5, and 0.75 seconds:

AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
for (NSInteger i = 0; …
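The snippet above is cut off; here is a minimal Swift sketch of the same idea (four placeholder URLs, one audio track, inserts at quarter-second marks expressed with a timescale of 4 so the times stay exact):

import AVFoundation
import CoreMedia

func makeComposition(from urls: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let track = composition.addMutableTrack(withMediaType: .audio,
                                            preferredTrackID: kCMPersistentTrackID_Invalid)!
    let options = [AVURLAssetPreferPreciseDurationAndTimingKey: true]

    for (i, url) in urls.enumerated() {
        let asset = AVURLAsset(url: url, options: options)
        guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { continue }
        // i / 4 seconds: 0, 0.25, 0.5, 0.75 with no floating-point rounding.
        let start = CMTime(value: CMTimeValue(i), timescale: 4)
        try track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                  of: sourceTrack,
                                  at: start)
    }
    return composition
}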

How can I obtain the native (hardware-supported) audio sampling rates in order to avoid internal sample rate conversion?

丶灬走出姿态 submitted on 2019-12-07 01:24:13
Question: Can anybody point me to documentation stating the native sampling rates on the different iPhone versions, so I can avoid Core Audio's internal sample rate conversion?

Edit: Otherwise, can you please point me to a source code example of how I can get those values programmatically?

Edit: This Apple document (page 26) refers to a canonical audio format, but only mentions the sample type (PCM) and bit depth (16-bit). It doesn't mention any native sampling rates supported directly by the …
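There is no fixed table to read the rates from; a practical sketch (using AVAudioSession rather than a documentation lookup) is to activate the session and ask what rate the hardware actually settled on, then build the ASBD around that value:

import AVFoundation

func currentHardwareSampleRate() throws -> Double {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    // A preference only; newer devices may still run at 48 kHz.
    try session.setPreferredSampleRate(44100)
    try session.setActive(true)
    // After activation this is the rate the hardware is really using,
    // so matching it in the ASBD avoids an internal sample rate conversion.
    return session.sampleRate
}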