core-audio

How to read header chunks from a CAF file using Core Audio / AudioToolbox

对着背影说爱祢 submitted on 2019-12-13 04:28:10
Question: I'm trying to read a CAF file on OS X using AudioToolbox's Extended Audio File API. Opening the file works fine; however, I need to access the UUID chunk, and I cannot find any reference on how to do that (or how to access any header chunk of the file). Surely there must be a way to do this without parsing the file on my own. PS: I can already do this with libsndfile, but I want to find a way to do it using only components that come with OS X. I already tried calling ExtAudioFileGetProperty()
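The excerpt cuts off here, but two routes are worth noting: the plain AudioFile API (rather than ExtAudioFile) exposes user-data chunks via AudioFileGetUserDataSize/AudioFileGetUserData, and the CAF chunk layout itself is simple enough to walk by hand as a fallback. Below is a hedged sketch of the manual fallback in portable C over an in-memory file image; `caf_find_chunk` and `read_be64` are my own names, not Apple APIs.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* CAF stores all header fields big-endian; read an 8-byte chunk size. */
static int64_t read_be64(const uint8_t *p) {
    int64_t v = 0;
    for (int i = 0; i < 8; i++) v = (v << 8) | p[i];
    return v;
}

/* Walk the chunk list of an in-memory CAF image and return a pointer to the
 * payload of the chunk whose 4-byte type matches `type` (e.g. "uuid"),
 * or NULL if absent/malformed. `out_size` receives the payload size.
 * Hypothetical helper, not an Apple API. */
static const uint8_t *caf_find_chunk(const uint8_t *buf, size_t len,
                                     const char type[4], int64_t *out_size) {
    if (len < 8 || memcmp(buf, "caff", 4) != 0) return NULL;
    size_t pos = 8;            /* skip 4-byte magic + UInt16 version + UInt16 flags */
    while (pos + 12 <= len) {  /* each chunk: 4-byte type + 8-byte big-endian size */
        const uint8_t *hdr = buf + pos;
        int64_t size = read_be64(hdr + 4);
        if (size < 0 || pos + 12 + (size_t)size > len) return NULL;
        if (memcmp(hdr, type, 4) == 0) {
            *out_size = size;
            return hdr + 12;   /* payload starts after the 12-byte chunk header */
        }
        pos += 12 + (size_t)size;
    }
    return NULL;
}
```

On a real file you would memory-map or read the file into `buf`; the AudioFile route avoids even that.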

Experiencing audio dropouts with OS X core audio playback/output

我们两清 submitted on 2019-12-13 02:35:01
Question: I'm doing playback with Core Audio (OS X 10.11.4 Beta, older Mac mini) using a simple output audio unit configured for input and output (though all of my problems seem to be with output). This is a streaming audio source from a socket/internet feeding into a Boost lock-free queue, which then feeds into the output AU. I'm getting audio dropouts that appear to be the result of Core Audio intermittently not calling the AU render callback. Here is a graph. There were ~10 seconds of flawless
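For context on the lock-free queue mentioned above: a render callback must never block or allocate, so the standard pattern is a single-producer/single-consumer ring buffer where the network thread writes and the callback reads, emitting silence on underrun instead of waiting. A minimal C11 sketch of that pattern (roughly what boost::lockfree::spsc_queue provides; `spsc_ring` and friends are my own names):

```c
#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define RING_CAP 1024  /* must be a power of two for the index mask */

/* Single-producer/single-consumer lock-free ring of float samples. */
typedef struct {
    float buf[RING_CAP];
    _Atomic size_t head;  /* written only by the producer (network thread) */
    _Atomic size_t tail;  /* written only by the consumer (render callback) */
} spsc_ring;

/* Producer side: returns the number of samples actually written. */
static size_t ring_write(spsc_ring *r, const float *src, size_t n) {
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    size_t free_slots = RING_CAP - (head - tail);
    if (n > free_slots) n = free_slots;
    for (size_t i = 0; i < n; i++)
        r->buf[(head + i) & (RING_CAP - 1)] = src[i];
    atomic_store_explicit(&r->head, head + n, memory_order_release);
    return n;
}

/* Consumer side: fills dst with n samples, zero-padding (silence) on
 * underrun rather than blocking the real-time thread. */
static size_t ring_read(spsc_ring *r, float *dst, size_t n) {
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    size_t avail = head - tail;
    size_t take = n < avail ? n : avail;
    for (size_t i = 0; i < take; i++)
        dst[i] = r->buf[(tail + i) & (RING_CAP - 1)];
    memset(dst + take, 0, (n - take) * sizeof(float));  /* underrun -> silence */
    atomic_store_explicit(&r->tail, tail + take, memory_order_release);
    return take;
}
```

If the callback itself is clean, intermittent missed callbacks usually point at the device/host side (sample-rate mismatch, competing processes) rather than the queue.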

How can I implement a volume meter for a song currently playing? (iPhone OS 3.1.3)

倾然丶 夕夏残阳落幕 submitted on 2019-12-13 02:34:35
Question: I'm very new to Core Audio and would just like some help coding up a little volume meter for whatever is being output through the headphones or built-in speaker, like a dB meter. I have the following code, and I have been trying to work through the Apple sample project "SpeakHere", but it's a nightmare trying to get through all of that without knowing how it works first... Could anyone shed some light? Here's the code I have so far: - (void)displayWaveForm { while (musicIsPlaying == YES) { NSLog(@

Core Audio Swift Equalizer adjusts all bands at once?

冷暖自知 submitted on 2019-12-13 02:09:05
Question: I am having trouble setting up a kAudioUnitSubType_NBandEQ in Swift. Here is my code to initialize the EQ: var cd: AudioComponentDescription = AudioComponentDescription(componentType: OSType(kAudioUnitType_Effect), componentSubType: OSType(kAudioUnitSubType_NBandEQ), componentManufacturer: OSType(kAudioUnitManufacturer_Apple), componentFlags: 0, componentFlagsMask: 0) // Add the node to the graph status = AUGraphAddNode(graph, &cd, &MyAppNode) println(status) // Once the graph has been opened get

Speech recognition and intonation detection

好久不见. submitted on 2019-12-13 00:57:35
Question: I want to make an iOS app that counts interrogative sentences. I will look for WH-questions and also "will I, am I?"-style questions. I am not very well versed in the speech or audio technology world, but I did Google around and found that there are a few speech recognition SDKs. I still have no idea, though, how I can detect and graph intonation. Are there any SDKs that support intonation or emotional speech recognition? Source: https://stackoverflow.com/questions/15527107/speech-recogition-and-intonation-detection

Generate sine wave to play middle C using PortAudio

落爺英雄遲暮 submitted on 2019-12-13 00:53:30
Question: I am having trouble generating specific frequencies in PortAudio. Whenever I try to change the frequency inside sin(n * FREQ * 2 * PI / SAMPLE_RATE), the frequency remains the same, but the sound does seem to change in timbre: the higher the frequency value I put in there, the uglier the sound, yet the pitch stays the same. This is what I have in my patestCallback loop: static int patestCallback( const void *inputBuffer, void *outputBuffer, unsigned long framesPerBuffer, const

Sound plays in iphone simulator with breakpoint, but fails to play without breakpoint

走远了吗. submitted on 2019-12-12 19:16:10
Question: I am trying to get a sound to play in the iPhone Simulator (using the 3.1 SDK). When I add a breakpoint and step through the code in GDB, the sound plays. However, if I disable the breakpoint, the sound does not play. The code to invoke the sound is: SoundEffect *completeSound = [myobj completeSound]; if (completeSound != nil) { [completeSound play]; } The SoundEffect class has a simple play method: // Plays the sound associated with a sound effect object. - (void)play { // Calls Core Audio to

Playback through a bluetooth connected speaker

浪子不回头ぞ submitted on 2019-12-12 18:23:35
Question: In my app I am using the play-and-record category, i.e.: UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord; CheckError( AudioSessionSetProperty (kAudioSessionProperty_AudioCategory, sizeof (sessionCategory), &sessionCategory), "Couldn't set audio category"); In the app, any audio that plays would initially output through the receiver, until I set this: UInt32 doChangeDefaultRoute = 1; AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof
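A related configuration fragment, mirroring the same deprecated AudioSession C API the question already uses (this is a sketch for on-device use, not runnable standalone): with the PlayAndRecord category, Bluetooth routes are excluded by default and must be explicitly allowed with a category override before a paired speaker or headset shows up as a valid route.

```c
#include <AudioToolbox/AudioToolbox.h>

/* Configuration fragment: allow Bluetooth HFP routes under PlayAndRecord.
 * Uses the same deprecated AudioSession C API as the question's snippet;
 * the modern equivalent lives on AVAudioSession. */
static void allowBluetoothRoutes(void) {
    UInt32 allowBluetooth = 1;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryEnableBluetoothInput,
                            sizeof(allowBluetooth),
                            &allowBluetooth);
}
```

As with the speaker override in the question, this must be set after the category and re-applied if the category changes.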

Changing pitch in an iOS audio player—like Alvin and the Chipmunks

你离开我真会死。 submitted on 2019-12-12 18:23:05
Question: I found this code fragment on Stack Overflow and I think it is what I want to use, but I cannot get it to change the pitch as I expected. I figured changing the 44100.0 up or down would affect the pitch, but I'm getting no change, regardless of the setValue. NSMutableDictionary *settings = [[NSMutableDictionary alloc] init]; [settings setValue: [NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey]; Clearly I'm missing something. Any additional fragments available to give this some more context?
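For context on why the settings dictionary above does nothing for pitch: AVSampleRateKey describes how audio is recorded or encoded, not how it is played back. The chipmunk effect comes from reading the samples back faster than they were recorded, i.e. varispeed playback (what Apple's AUVarispeed unit does). A minimal linear-interpolation varispeed in plain C; `varispeed` is my own illustrative helper, not an Apple API:

```c
#include <stddef.h>

/* Varispeed playback by linear interpolation: stepping through the source
 * at `rate` times normal speed shifts pitch and tempo together
 * (rate = 2.0 -> one octave up, the "chipmunk" effect; rate = 0.5 -> octave
 * down). Returns the number of output samples produced. */
static size_t varispeed(const float *in, size_t n_in,
                        float *out, size_t max_out, double rate) {
    size_t n_out = 0;
    double pos = 0.0;                          /* fractional read position */
    while (pos < (double)(n_in - 1) && n_out < max_out) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        out[n_out++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        pos += rate;  /* step > 1 reads faster: higher pitch, shorter output */
    }
    return n_out;
}
```

Pitch-shifting without the tempo change requires a real time-stretching algorithm; varispeed alone is exactly the Alvin-and-the-Chipmunks sound the title asks for.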

iOS 7 robotic/garbled audio in speaker mode on iPhone 5s

限于喜欢 submitted on 2019-12-12 15:26:07
Question: We have a VoIP application that records and plays audio, so we are using the PlayAndRecord (kAudioSessionCategory_PlayAndRecord) audio session category. So far we have used it successfully on iPhone 4/4s/5 with both iOS 6 and iOS 7, where call audio and tones played clearly and were audible. However, with the iPhone 5s we observed that both the call audio and tones sound robotic/garbled in speaker mode. When using the earpiece/Bluetooth/headset, the sound is clear and audible. iOS version used with