core-audio

Core Audio (iOS 5.1) Reverb2 properties do not exist, error code -10877

狂风中的少年 submitted on 2019-12-05 06:48:12
I am playing with Apple's sample project "LoadPresetDemo". I have added the reverb audio unit (kAudioUnitSubType_Reverb2, the only reverb available on iOS) to the graph. The Core Audio header "AudioUnitParameters.h" states that Reverb2 should respond to these parameters:

enum {
    // Global, CrossFade, 0->100, 100
    kReverb2Param_DryWetMix = 0,
    // Global, Decibels, -20->20, 0
    kReverb2Param_Gain = 1,
    // Global, Secs, 0.0001->1.0, 0.008
    kReverb2Param_MinDelayTime = 2,
    // Global, Secs, 0.0001->1.0, 0.050
    kReverb2Param_MaxDelayTime = 3,
    // Global, Secs, 0.001->20.0, 1.0
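For reference, a minimal Swift sketch of setting one of these Reverb2 parameters, assuming the reverb AudioUnit has already been pulled out of the AUGraph; the variable and function names here are illustrative, not from LoadPresetDemo:

import AudioToolbox

// Assumes `reverbUnit` was fetched with AUGraphNodeInfo after adding a node
// of type kAudioUnitType_Effect / kAudioUnitSubType_Reverb2.
func setReverbMix(_ reverbUnit: AudioUnit, percent: Float32) {
    // Reverb2 parameters are Global scope on element 0.
    let status = AudioUnitSetParameter(reverbUnit,
                                       kReverb2Param_DryWetMix,
                                       kAudioUnitScope_Global,
                                       0,            // element
                                       percent,      // 0...100
                                       0)            // buffer offset in frames
    if status != noErr {
        print("AudioUnitSetParameter failed: \(status)")
    }
}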

precise timing with AVMutableComposition

泄露秘密 submitted on 2019-12-05 06:13:47
I'm trying to use AVMutableComposition to play a sequence of sound files at precise times. When the view loads, I create the composition with the intent of playing 4 sounds evenly spaced over 1 second. It shouldn't matter how long or short the sounds are; I just want to fire them at exactly 0, 0.25, 0.5, and 0.75 seconds:

AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};

for (NSInteger i = 0; i < 4; i++) {
    AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType
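An illustrative Swift sketch of the same idea (the question's code is Objective-C): four short sounds placed at 0, 0.25, 0.5, and 0.75 s, each on its own composition track, with precise-timing asset loading.

import AVFoundation

func makeComposition(soundURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let options = [AVURLAssetPreferPreciseDurationAndTimingKey: true]

    for (i, url) in soundURLs.prefix(4).enumerated() {
        let asset = AVURLAsset(url: url, options: options)
        guard let sourceTrack = asset.tracks(withMediaType: .audio).first,
              let track = composition.addMutableTrack(withMediaType: .audio,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
        else { continue }
        let start = CMTime(value: CMTimeValue(i), timescale: 4)   // i * 0.25 s
        try track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                  of: sourceTrack,
                                  at: start)
    }
    return composition
}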

iOS Swift read PCM Buffer

喜你入骨 submitted on 2019-12-05 06:00:16
Question: I have an Android project that reads a short[] array of PCM data from the microphone buffer for live analysis, and I need to port this functionality to iOS in Swift. In Android it is very simple and looks like this:

import android.media.AudioFormat;
import android.media.AudioRecord;
...
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, someSampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        AudioRecord.getMinBufferSize(...));
recorder.startRecording();
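One common Swift equivalent, assuming AVAudioEngine is acceptable for the job: install a tap on the input node and read samples out of the AVAudioPCMBuffer it delivers. This is a sketch, not the only route (Audio Unit render callbacks also work).

import AVFoundation

let engine = AVAudioEngine()

func startMicCapture() throws {
    // An active AVAudioSession with a .record or .playAndRecord category is assumed.
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)   // hardware sample rate / channel count

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // floatChannelData is non-nil for float PCM formats; convert to Int16
        // here if the analysis code really needs 16-bit samples.
        guard let channel = buffer.floatChannelData?[0] else { return }
        let frames = Int(buffer.frameLength)
        let samples = UnsafeBufferPointer(start: channel, count: frames)
        // ... run live analysis on `samples` ...
        _ = samples
    }
    try engine.start()
}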

How to get notification if System Preferences Default Sound changed

断了今生、忘了曾经 submitted on 2019-12-05 03:41:53
Question: This is fairly simple (I think). I simply want to get a notification in my application whenever a user changes the default sound input or sound output device in System Preferences - Sound. I'll be darned if I'm able to dig it out of the Apple docs, however. As a side note, this is for OS X, not iOS. Thanks guys!

Answer 1: Set up an AudioObjectPropertyAddress for the default output device:

AudioObjectPropertyAddress outputDeviceAddress = {
    kAudioHardwarePropertyDefaultOutputDevice,
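A Swift sketch of the same listener on macOS, registering a block on the system audio object; swap in kAudioHardwarePropertyDefaultInputDevice for the input side. Names and the dispatch queue choice are illustrative.

import CoreAudio

var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultOutputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)   // kAudioObjectPropertyElementMaster on older SDKs

let status = AudioObjectAddPropertyListenerBlock(
    AudioObjectID(kAudioObjectSystemObject),
    &address,
    DispatchQueue.main) { _, _ in
        print("Default output device changed")
}
if status != noErr {
    print("Failed to add listener: \(status)")
}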

record live streaming audio

断了今生、忘了曾经 submitted on 2019-12-05 01:41:54
Question: I'm making an app that has to play and record streaming audio from the internet on iPad. The audio streaming part is done; I will get to the recording part very soon and I have no idea how to proceed. Could you give me a hint? Any idea? It will have to play while simultaneously recording into AAC or MP3. Thanks.

Answer 1: You'll need to use the lower-level AudioQueue API, and use the AudioSession API to set up the audio session. Then you'll need to fill out an
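The session-setup step the answer refers to, shown as a Swift sketch with the modern AVAudioSession API rather than the old C AudioSession calls: allow playback and recording at the same time before wiring up the recording path.

import AVFoundation

func configureSessionForPlayAndRecord() throws {
    let session = AVAudioSession.sharedInstance()
    // Play and capture simultaneously; route output to the speaker by default.
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
}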

What is a correct way to manage AudioKit's lifecycle?

青春壹個敷衍的年華 submitted on 2019-12-05 01:35:47
Question: I'm building an app that has to track the input amplitude of the user's mic. AudioKit has a bunch of handy objects for my needs: AKAmplitudeTracker and so on. I haven't found any viable info on how AudioKit is supposed to be started, how to begin tracking, etc. For now, all code related to AudioKit initialization is in the viewDidLoad method of the root VC of my audio recorder module. That is not correct, because random errors occur and I can't track what's wrong. The code below shows how I use AudioKit now. var silence:
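One common shape for this, sketched against the AudioKit 4.x API (an assumption about the version in use): keep the whole signal chain in one long-lived object instead of viewDidLoad, and start/stop it explicitly.

import AudioKit

final class AmplitudeTrackerManager {
    private let mic = AKMicrophone()
    private var tracker: AKAmplitudeTracker!
    private var silence: AKBooster!

    func start() throws {
        tracker = AKAmplitudeTracker(mic)
        silence = AKBooster(tracker, gain: 0)   // muted booster so the mic is not heard
        AudioKit.output = silence
        try AudioKit.start()
    }

    var amplitude: Double {
        return tracker.amplitude
    }

    func stop() throws {
        try AudioKit.stop()
    }
}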

Granular Synthesis in iOS 6 using AudioFileServices

只谈情不闲聊 submitted on 2019-12-05 00:50:39
Question: I have a question regarding a sound synthesis app that I'm working on. I am trying to read in an audio file, create randomized 'grains' using granular synthesis techniques, place them into an output buffer, and then play that back to the user using OpenAL. For testing purposes, I am simply writing the output buffer to a file that I can then listen back to. Judging by my results, I am on the right track, but I am getting some aliasing issues and playback sounds that just don't seem
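Clicks at grain boundaries are a common cause of this kind of artifact. A small Swift sketch of applying a Hann window to each grain before it is mixed into the output buffer; this is a guess at where the problem lies, not the asker's code.

import Foundation

// Fades each grain in and out so there are no hard edges at grain boundaries.
func hannWindowed(_ grain: [Float]) -> [Float] {
    let n = grain.count
    guard n > 1 else { return grain }
    return grain.enumerated().map { i, sample in
        let w = 0.5 * (1.0 - cos(2.0 * Float.pi * Float(i) / Float(n - 1)))
        return sample * w
    }
}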

iPhone does not vibrate while recording

徘徊边缘 submitted on 2019-12-04 23:02:50
Question: I am modifying the AurioTouch example. I want to vibrate the phone in response to particular sound inputs. I can detect the inputs and printf them, but AudioServicesPlaySystemSound(kSystemSoundID_Vibrate) doesn't do anything while the session category is kAudioSessionCategory_PlayAndRecord.

Answer 1: The answer is that Apple doesn't allow this. All audio session categories that allow recording turn off vibration.

Answer 2: Do you need to vibrate and record at the same time? If you don't, you can stop your audio unit
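A Swift sketch of the approach from Answer 2: pause the recording audio unit, vibrate, then resume. Here `rioUnit` stands in for AurioTouch's remote I/O unit; the 0.5 s delay is an arbitrary illustrative value.

import Foundation
import AudioToolbox

func vibrateWhilePaused(_ rioUnit: AudioUnit) {
    AudioOutputUnitStop(rioUnit)                            // stop capturing
    AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)    // vibration works again
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
        AudioOutputUnitStart(rioUnit)                       // resume capturing
    }
}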

Decode MP3 File from NSData

僤鯓⒐⒋嵵緔 submitted on 2019-12-04 23:00:01
Question: For my application, I need to decode an MP3 file that is stored in an NSData object. For security reasons, it is undesirable to write the NSData object to disk and re-open it using a system URL reference, even if it is only stored locally for a few moments. I would like to take advantage of Extended Audio File Services (or Audio File Services) to do this, but I'm having trouble getting a representation of the NSData, which exists only in memory, that can be read by these Audio File Services. Edit
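A Swift sketch of the usual workaround: open the in-memory MP3 with AudioFileOpenWithCallbacks, backing the read and get-size callbacks with the NSData instead of a file. The Mp3Source wrapper is hypothetical, and exact optionality of the imported callback parameters may need small adjustments depending on the SDK.

import AudioToolbox
import Foundation

final class Mp3Source {
    let data: NSData
    private(set) var fileID: AudioFileID?

    init(data: NSData) { self.data = data }

    func open() -> OSStatus {
        let clientData = Unmanaged.passUnretained(self).toOpaque()
        return AudioFileOpenWithCallbacks(
            clientData,
            { clientData, position, requestCount, buffer, actualCount in
                // Hand Audio File Services the byte range it asked for.
                let source = Unmanaged<Mp3Source>.fromOpaque(clientData).takeUnretainedValue()
                let count = max(0, min(Int(requestCount), source.data.length - Int(position)))
                if count > 0 {
                    source.data.getBytes(buffer, range: NSRange(location: Int(position), length: count))
                }
                actualCount.pointee = UInt32(count)
                return noErr
            },
            nil,                                   // write callback not needed
            { clientData in
                Int64(Unmanaged<Mp3Source>.fromOpaque(clientData).takeUnretainedValue().data.length)
            },
            nil,                                   // set-size callback not needed
            kAudioFileMP3Type,
            &fileID)
    }
}

If the Extended Audio File Services conversion API is preferred, the resulting AudioFileID can then be wrapped with ExtAudioFileWrapAudioFileID.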

Audio Session “Ducking” Broken in iOS 4…?

怎甘沉沦 submitted on 2019-12-04 22:03:00
Question: I have an app which uses the MPAudioPlayerController to access the iPod music library, and an AVAudioPlayer to overlay audio on top of the music. I've used this documentation as a guide. Specifically: Finally, you can enhance a category to automatically lower the volume of other audio when your audio is playing. This could be used, for example, in an exercise application. Say the user is exercising along to their iPod when your application wants to overlay a verbal message—for instance, “You’ve
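A sketch of how this ducking behavior is usually set up with the current AVAudioSession Swift API (the question predates it): duck the iPod audio while the overlay plays, then deactivate the session so the music returns to full volume.

import AVFoundation

func playOverlay(_ player: AVAudioPlayer) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [.duckOthers])
    try session.setActive(true)
    player.play()
}

func overlayFinished() throws {
    // Ducking is only released once the session is deactivated.
    try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
}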