core-audio

iOS: Audio Units vs OpenAL vs Core Audio

三世轮回 submitted on 2019-12-04 07:45:55

Question: Could someone explain to me how OpenAL fits into the sound architecture on the iPhone? There seem to be APIs at different levels for handling sound. The higher-level ones are easy enough to understand, but my understanding gets murky towards the bottom. There are Core Audio, Audio Units, and OpenAL. What is the connection between these? Is OpenAL the substratum upon which Core Audio rests (which contains Audio Units as one of its lower-level objects)? OpenAL doesn't seem to be documented by…
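For orientation, the layering is actually the reverse of the question's guess: on iOS, OpenAL is implemented on top of Core Audio, and Audio Units are the lowest-level public audio API. A minimal sketch of opening the RemoteIO unit, the usual entry point at the Audio Unit layer (error checking omitted for brevity):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Locate and instantiate the RemoteIO audio unit, the lowest-level
// public way to produce sound on iOS. A real app should check the
// OSStatus returned by each call.
static AudioUnit CreateRemoteIOUnit(void) {
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
        .componentFlags = 0,
        .componentFlagsMask = 0,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(comp, &unit);
    AudioUnitInitialize(unit);
    return unit;
}
```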

How to program a real-time accurate audio sequencer on the iphone?

。_饼干妹妹 submitted on 2019-12-04 07:35:19

Question: I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I tried every audio technique available on the iPhone, starting from AudioServicesPlaySystemSound and AVAudioPlayer and OpenAL through to Audio Queues. In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and allows loading sounds into multiple buffers and then playing them whenever needed. Here is the basic code: init: int channelGroups[1]; channelGroups[0] = 8; soundEngine = […
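Scheduling with wall-clock timers is what typically causes the jitter; the reliable pattern is to count samples inside an Audio Unit render callback and fire events at exact frame offsets. A minimal sketch (the variable names and the 0.5-second interval are illustrative, not from the question):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical shared state: the absolute sample frame at which the
// next sequencer event should fire, at a 44.1 kHz sample rate.
static SInt64 gNextEventFrame = 0;
static SInt64 gFramesRendered = 0;

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // Trigger events with single-sample accuracy by comparing
        // the running frame counter against the scheduled frame.
        if (gFramesRendered + i == gNextEventFrame) {
            // start the next note / advance the sequence here
            gNextEventFrame += 22050; // e.g. one event every 0.5 s
        }
        out[i] = 0.0f; // mix the currently sounding voices here
    }
    gFramesRendered += inNumberFrames;
    return noErr;
}
```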

Bluetooth headphone music quality deteriorates when launching iOS simulator

若如初见. submitted on 2019-12-04 07:20:02

Question: The situation goes a little something like this: I am programming in Xcode while listening to music on my Bluetooth headphones... you know, to block out the world. Then I launch my app in the iOS Simulator and BOOM, all of a sudden my crystal-clear music becomes garbled and super low quality, like it is playing in a bathtub two blocks away... in the 1940s. Note: the quality deterioration does NOT occur if I am playing music through my laptop or cinema display and I launch the sim.

iOS Audio Unit - Creating Stereo Sine Waves

此生再无相见时 submitted on 2019-12-04 07:01:09

Over the weekend I hit a stumbling block learning how to program audio synthesis on iOS. I have been developing on iOS for several years, but I am just getting into the audio synthesis aspect. Right now I am just writing demo apps to help me learn the concepts. I have been able to build and stack sine waves in a playback renderer for Audio Units without a problem, but I want to understand what is going on in the renderer so I can render two separate sine waves, one in the left channel and one in the right. Currently, I assume that in my audio init section I would need to make the following…
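A minimal sketch of a render callback that writes independent sine waves to the left and right channels, assuming a non-interleaved stereo stream format (two buffers in the AudioBufferList); the frequencies and phase variables are illustrative:

```objc
#import <AudioToolbox/AudioToolbox.h>
#include <math.h>

static double gPhaseL = 0.0, gPhaseR = 0.0;

static OSStatus StereoSineRender(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    const double sampleRate = 44100.0;
    const double freqL = 440.0, freqR = 554.37; // A4 left, C#5 right
    const double incL = 2.0 * M_PI * freqL / sampleRate;
    const double incR = 2.0 * M_PI * freqR / sampleRate;

    // With a non-interleaved stereo format, buffer 0 is the left
    // channel and buffer 1 is the right channel.
    Float32 *left  = (Float32 *)ioData->mBuffers[0].mData;
    Float32 *right = (Float32 *)ioData->mBuffers[1].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        left[i]  = (Float32)(0.25 * sin(gPhaseL));
        right[i] = (Float32)(0.25 * sin(gPhaseR));
        gPhaseL += incL;
        gPhaseR += incR;
    }
    return noErr;
}
```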

Force audio alert to loud speaker

泄露秘密 submitted on 2019-12-04 06:42:24

Question: I have a small app. In this app, the loudspeaker makes a noise at a certain interval that I set up. Now, I want it to make the noise over the built-in speaker even if a headset is plugged into the device. How can I do this? Answer 1: You can try the code below to play the sound on the speaker. Hope this will help you. [[AVAudioSession sharedInstance] setDelegate:self]; [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil]; [[AVAudioSession…
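The answer's snippet is cut off above. A complete sketch of the standard speaker-override pattern, using the AVAudioSession route override introduced in iOS 6 (a common substitute for the truncated code, not the answer's original continuation):

```objc
#import <AVFoundation/AVFoundation.h>

// Route playback to the built-in speaker even when a headset is
// plugged in. The PlayAndRecord category is required for the
// speaker override to take effect.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker
                           error:&error];
[session setActive:YES error:&error];
```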

React Native Audio Visualization

只愿长相守 submitted on 2019-12-04 05:33:59

So I am using the react-native-audio package to play preloaded audio files and capture the user's recorded audio. What I would like to do is convert the audio into some sort of data for visualization and analysis. There seem to be several options for the web, but not much in this direction specifically for React Native. How would I achieve this? Thank you. Juanan Jimenez: I've just bumped into this post. I am building a React Native waveform visualiser; it is still a work in progress on the Android side, but it is working on the iOS side. It is pretty much a port of WaveForm on iOS, using Igor Shubin's…
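On the native iOS side, the usual approach is to read the file's PCM samples and reduce them to per-bucket amplitudes that can be passed over the bridge for drawing. A minimal sketch using AVAudioFile (the function name and bucket scheme are illustrative, not from either library mentioned above):

```objc
#import <AVFoundation/AVFoundation.h>
#include <math.h>

// Read an audio file and return one RMS amplitude per bucket,
// suitable for rendering a waveform. Illustrative sketch with
// minimal error handling.
NSArray<NSNumber *> *WaveformBuckets(NSURL *url, NSUInteger bucketCount) {
    NSError *error = nil;
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:&error];
    if (!file) return @[];

    AVAudioPCMBuffer *buffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat
                                      frameCapacity:(AVAudioFrameCount)file.length];
    if (![file readIntoBuffer:buffer error:&error]) return @[];

    const float *samples = buffer.floatChannelData[0]; // first channel
    NSUInteger perBucket = buffer.frameLength / bucketCount;
    if (perBucket == 0) return @[];

    NSMutableArray<NSNumber *> *buckets =
        [NSMutableArray arrayWithCapacity:bucketCount];
    for (NSUInteger b = 0; b < bucketCount; b++) {
        double sum = 0.0;
        for (NSUInteger i = 0; i < perBucket; i++) {
            float s = samples[b * perBucket + i];
            sum += s * s;
        }
        [buckets addObject:@(sqrt(sum / perBucket))];
    }
    return buckets;
}
```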

How can I use AVAudioPlayer to play audio faster *and* higher pitched?

若如初见. submitted on 2019-12-04 04:38:55

Statement of the problem: I have a collection of sound effects in my app stored as .m4a files (AAC format, 48 kHz, 16-bit) that I want to play at a variety of speeds and pitches, without having to pre-generate all the variants as separate files. Although the .rate property of an AVAudioPlayer object can alter playback speed, it always maintains the original pitch, which is not what I want. Instead, I simply want to play the sound sample faster or slower and have the pitch go up or down to match, just like speeding up or slowing down an old-fashioned reel-to-reel tape recorder. In other words, I…
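The usual answer is to replace AVAudioPlayer with AVAudioEngine plus an AVAudioUnitVarispeed node, whose rate property changes speed and pitch together, tape-style. A minimal sketch (the file name is a placeholder; keep the engine and nodes alive in properties for as long as playback runs):

```objc
#import <AVFoundation/AVFoundation.h>

// Play a bundled .m4a at 1.5x speed with the pitch rising to match,
// like a sped-up tape.
NSError *error = nil;
NSURL *url = [[NSBundle mainBundle] URLForResource:@"effect"
                                     withExtension:@"m4a"];
AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:&error];

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
AVAudioUnitVarispeed *varispeed = [[AVAudioUnitVarispeed alloc] init];
varispeed.rate = 1.5f; // > 1 is faster and higher; < 1 slower and lower

[engine attachNode:player];
[engine attachNode:varispeed];
[engine connect:player to:varispeed format:file.processingFormat];
[engine connect:varispeed to:engine.mainMixerNode
         format:file.processingFormat];

[engine startAndReturnError:&error];
[player scheduleFile:file atTime:nil completionHandler:nil];
[player play];
```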

OSStatus NSOSStatusErrorDomain

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-04 04:22:40

I received the following error when getting a property using AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &myAudioDescription.mSampleRate). The error produced by the statement above is: Error Domain=NSOSStatusErrorDomain Code=560557673 "The operation couldn’t be completed. (OSStatus error 560557673.)" Now, what does 560557673 mean and where can I find its explanation? The documentation only lists NSOSStatusErrorDomain as one of the error domains. Answer: 560557673 is 0x21696E69, the four-character code '!ini', i.e. kAudioSessionNotInitialized: AudioSessionInitialize was not called before using the session. OSStatus is a type commonly used for error codes in OS X and iOS. If the…
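Most Core Audio OSStatus values are four printable ASCII bytes packed into a 32-bit integer, so a small helper can decode them. A sketch (not from the original answer):

```objc
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#import <CoreFoundation/CoreFoundation.h>

// Print an OSStatus as a four-character code when all four bytes
// are printable, otherwise as a plain integer.
static void PrintOSStatus(OSStatus status) {
    UInt32 big = CFSwapInt32HostToBig((UInt32)status);
    char code[5];
    memcpy(code, &big, 4);
    code[4] = '\0';

    Boolean printable = true;
    for (int i = 0; i < 4; i++) {
        if (!isprint((unsigned char)code[i])) printable = false;
    }
    if (printable) {
        printf("OSStatus '%s' (%d)\n", code, (int)status);
    } else {
        printf("OSStatus %d\n", (int)status);
    }
}

// PrintOSStatus(560557673) prints: OSStatus '!ini' (560557673)
```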

difference between AudioQueue time and AudioQueue Device time

喜你入骨 submitted on 2019-12-04 04:21:13

Question: I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read using AVAssetReader and sent in packets to the client, which in turn feeds it into a ring buffer, which in turn populates the AudioQueue buffers and starts playing. I was going over the AudioQueue docs, and there seem to be two different concepts of a timestamp related to the AudioQueue: Audio Queue Time and Audio Queue Device Time. I'm not sure how those two are related and when one should be used rather…
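For reference, the two timelines are exposed by a pair of APIs: AudioQueueGetCurrentTime reads the queue's own timeline (sample time counted from when that queue started), and AudioQueueDeviceTranslateTime converts a timestamp between the queue's timeline and the audio hardware's. A sketch, assuming a started AudioQueueRef:

```objc
#import <AudioToolbox/AudioToolbox.h>
#include <stdio.h>

// Sketch: read the queue's own time, then translate the same
// instant onto the hardware device's timeline.
void LogQueueTimes(AudioQueueRef queue) {
    AudioTimeStamp queueTime = {0};
    Boolean discontinuity = false;
    // Queue time: this queue's timeline, starting when the queue starts.
    AudioQueueGetCurrentTime(queue, NULL, &queueTime, &discontinuity);

    // Device time: the hardware timeline, shared by everything
    // playing on the device.
    AudioTimeStamp deviceTime = {0};
    AudioQueueDeviceTranslateTime(queue, &queueTime, &deviceTime);

    printf("queue sample time: %f, device sample time: %f\n",
           queueTime.mSampleTime, deviceTime.mSampleTime);
}
```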

Audio Session Services: kAudioSessionProperty_OverrideAudioRoute with different routes for input & output

﹥>﹥吖頭↗ submitted on 2019-12-04 04:06:38

I'm messing around with Audio Session Services. I'm trying to control the audio routes by setting AudioSessionSetProperty: kAudioSessionProperty_OverrideAudioRoute to kAudioSessionOverrideAudioRoute_Speaker. The problem is that it changes the route for both input and output. What I want is to have input come from the headset's mic, and output go through the speakers. Any ideas? Thanks! You can do this in iOS 5 with the properties kAudioSessionProperty_InputSource and kAudioSessionProperty_OutputDestination. For the possible values (what sources and destinations are available on the device), use AudioSessionGetProperty with…
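A sketch of how these iOS 5 properties were typically combined: enumerate the available destinations, then select one by ID. The dictionary key and value types here are assumptions based on the old (now deprecated) AudioSession C API; verify against AudioSession.h before relying on them:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Assumed pattern for the deprecated iOS 5 AudioSession C API:
// list the available output destinations, then set one by its ID.
// Input sources work the same way via the InputSource properties.
void SelectFirstOutputDestination(void) {
    CFArrayRef destinations = NULL;
    UInt32 size = sizeof(destinations);
    AudioSessionGetProperty(kAudioSessionProperty_OutputDestinations,
                            &size, &destinations);
    if (destinations && CFArrayGetCount(destinations) > 0) {
        CFDictionaryRef dest =
            (CFDictionaryRef)CFArrayGetValueAtIndex(destinations, 0);
        CFNumberRef destID = (CFNumberRef)CFDictionaryGetValue(
            dest, kAudioSession_OutputDestinationKey_ID);
        AudioSessionSetProperty(kAudioSessionProperty_OutputDestination,
                                sizeof(destID), &destID);
    }
    if (destinations) CFRelease(destinations);
}
```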