core-audio

Turn non-interleaved audio data into interleaved

Submitted by 假装没事ソ on 2019-12-14 02:29:27
Question: I'm working with a modified version of Apple's MixerHost class which separates audio into two streams in its callbacks (this allows the resulting audio to be either mono or stereo). The result, clearly, is a non-interleaved stream. I'm new to Core Audio, I need an interleaved stream, and I'm wondering if anyone can point me in the right direction to modify the buffers in a callback to store interleaved stereo data. Thank you. Here is one of the callbacks: static OSStatus
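The core of the conversion the question asks about is mechanical: two separate per-channel buffers become one buffer with samples alternating L, R, L, R. A minimal sketch in plain C (the function name and float sample type are illustrative assumptions, not from the question; in a real render callback the pointers would come from an AudioBufferList):

```c
#include <stddef.h>

/* Interleave two separate channel buffers (left, right) into a single
 * stereo buffer laid out as L0 R0 L1 R1 ... */
void interleave_stereo(const float *left, const float *right,
                       float *out, size_t frames) {
    for (size_t i = 0; i < frames; i++) {
        out[2 * i]     = left[i];   /* even slots: left channel  */
        out[2 * i + 1] = right[i];  /* odd slots:  right channel */
    }
}
```

Note that if the output format is interleaved, `mBuffers[0]` holds all channels and `mNumberChannels` is 2, whereas the non-interleaved layout uses one buffer per channel.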

iOS UI is causing a glitch in my audio stream

Submitted by 点点圈 on 2019-12-13 21:40:51
Question: I'm writing a VOIP-based app for the iPhone. I'm having a strange issue: when the user presses the screen there is a glitch in the audio, and the same happens when you press the volume up/down buttons on the phone itself. After days of debugging I've found it's something to do with my circular buffer. I swapped mine for the one here: http://atastypixel.com/blog/a-simple-fast-circular-buffer-implementation-for-audio-processing/ and this one doesn't cause a glitch, but the latency is nearly 4
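For context, the pattern at issue is a single-producer/single-consumer ring buffer that the audio callback can read without blocking. This is not the implementation from the linked article, just a minimal illustrative sketch (the type and function names are invented, and a real-time version would make `head`/`tail` atomic):

```c
#include <stddef.h>

/* Minimal SPSC ring buffer for audio samples. Capacity is a power of
 * two so the index wrap is a cheap bitmask; one slot is kept empty to
 * distinguish full from empty, so 1023 samples are usable. */
typedef struct {
    float  buf[1024];
    size_t head;  /* next write index (producer only) */
    size_t tail;  /* next read index  (consumer only) */
} RingBuffer;

size_t rb_write(RingBuffer *rb, const float *src, size_t n) {
    size_t written = 0;
    while (written < n && ((rb->head + 1) & 1023) != rb->tail) {
        rb->buf[rb->head] = src[written++];
        rb->head = (rb->head + 1) & 1023;
    }
    return written;  /* may be < n if the buffer filled up */
}

size_t rb_read(RingBuffer *rb, float *dst, size_t n) {
    size_t nread = 0;
    while (nread < n && rb->tail != rb->head) {
        dst[nread++] = rb->buf[rb->tail];
        rb->tail = (rb->tail + 1) & 1023;
    }
    return nread;  /* may be < n if the buffer ran dry */
}
```

Glitches on UI events are typically the render callback blocking (a lock, allocation, or Objective-C call inside the callback) rather than the buffer arithmetic itself.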

Is it possible to decode an MMS+WMA2 stream using audio units on the iPhone?

Submitted by 拜拜、爱过 on 2019-12-13 16:04:09
Question: I am not sure whether audio units can work as codecs in a streaming audio scenario on the iPhone. I've read in various places that it can be done, but I haven't seen any examples or proper documentation for it. Instead, I find that most of the released apps have used ffmpeg and libmms. I appreciate any help you can give me. Answer 1: Audio Units are very low-level and are useful if you want to do some heavy audio processing, like realtime audio effects. As far as I know, Audio Units doesn't

How to get the current output dB values of background music to display a sound level meter in iOS?

Submitted by 余生颓废 on 2019-12-13 14:50:40
Question: Is there an easy way to read the current dB sound level of what iOS is putting out to the speakers, or of background music that is playing? Answer 1: No. An app cannot get the audio output levels for the sound that any other app, for instance a background music player, is currently playing. Source: https://stackoverflow.com/questions/12661308/how-to-get-the-current-output-db-values-of-background-music-to-display-a-sound-l

How can I use AVAudioPlayer to play audio faster *and* higher pitched?

Submitted by 随声附和 on 2019-12-13 12:01:56
Question: Statement of problem: I have a collection of sound effects in my app stored as .m4a files (AAC format, 48 kHz, 16-bit) that I want to play at a variety of speeds and pitches, without having to pre-generate all the variants as separate files. Although the .rate property of an AVAudioPlayer object can alter playback speed, it always maintains the original pitch, which is not what I want. Instead, I simply want to play the sound sample faster or slower and have the pitch go up or down to match
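What the question asks for is plain resampling: reading the source faster raises pitch and shortens duration together, the "tape speed" effect that AVAudioPlayer's pitch-preserving `.rate` deliberately avoids. A minimal sketch of the idea with linear interpolation (the function name is illustrative; on iOS the ready-made route is AVAudioEngine with an AVAudioUnitVarispeed node, whose `rate` changes speed and pitch together):

```c
#include <stddef.h>

/* Resample by a rate factor using linear interpolation.
 * rate > 1.0: faster and higher pitched; rate < 1.0: slower and lower.
 * Returns the number of output samples produced (at most max_out). */
size_t resample_linear(const float *in, size_t n_in,
                       float *out, size_t max_out, float rate) {
    size_t n_out = 0;
    for (float pos = 0.0f;
         pos + 1.0f < (float)n_in && n_out < max_out;
         pos += rate) {
        size_t i    = (size_t)pos;        /* sample before the read head */
        float  frac = pos - (float)i;     /* fractional position in [0,1) */
        out[n_out++] = in[i] * (1.0f - frac) + in[i + 1] * frac;
    }
    return n_out;
}
```

Linear interpolation is audibly rough for large rate changes; production resamplers use windowed-sinc or polyphase filters, which is exactly what the varispeed unit does internally.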

CoreAudio - kAudioFileUnsupportedDataFormatError

Submitted by 北慕城南 on 2019-12-13 07:03:24
Question: I'm just getting started with Core Audio. I'm trying to create an audio file, but I'm getting a kAudioFileUnsupportedDataFormatError with the following. Can anyone give me an idea why? It all looks okay to me, but I must be doing something wrong. // Prepare the format AudioStreamBasicDescription asbd; memset(&asbd, 0, sizeof(asbd)); asbd.mSampleRate = SAMPLE_RATE; // 44100 asbd.mFormatID = kAudioFormatLinearPCM; asbd.mFormatFlags = kAudioFormatFlagIsBigEndian; asbd.mBitsPerChannel = 16; asbd
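A likely cause, given the snippet, is that the ASBD is incomplete: for linear PCM the format flags usually also need kAudioFormatFlagIsSignedInteger and kAudioFormatFlagIsPacked, and the derived size fields (mBytesPerFrame, mFramesPerPacket, mBytesPerPacket) must be filled in consistently rather than left at the memset's zero. A plain-integer sketch of the relationships those fields must satisfy (the struct and function here are illustrative stand-ins, not the real AudioStreamBasicDescription):

```c
#include <stdint.h>

/* The size fields of an interleaved linear-PCM stream description are
 * fully determined by bit depth and channel count; zeroes here are a
 * common cause of kAudioFileUnsupportedDataFormatError. */
typedef struct {
    uint32_t bitsPerChannel;
    uint32_t channelsPerFrame;
    uint32_t bytesPerFrame;
    uint32_t framesPerPacket;
    uint32_t bytesPerPacket;
} PcmLayout;

PcmLayout pcm_layout(uint32_t bitsPerChannel, uint32_t channels) {
    PcmLayout l;
    l.bitsPerChannel   = bitsPerChannel;
    l.channelsPerFrame = channels;
    l.bytesPerFrame    = (bitsPerChannel / 8) * channels;
    l.framesPerPacket  = 1;  /* always 1 for uncompressed PCM */
    l.bytesPerPacket   = l.bytesPerFrame * l.framesPerPacket;
    return l;
}
```

So for 16-bit stereo PCM, mBytesPerFrame and mBytesPerPacket should both be 4 and mFramesPerPacket should be 1.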

Intercom with Bluetooth headset

Submitted by 南笙酒味 on 2019-12-13 05:35:59
Question: I've been researching the ability to create a bi-directional audio link between iOS and a Bluetooth audio headset. What I need to be able to do is: when the user speaks into the microphone on the iOS device, that audio should be redirected to the audio out of the headset, the earpiece. When the microphone of the headset picks up audio, that should come out of the speaker of the iOS device. In my searching I've found that you can: enable Bluetooth audio input and output: how to route iPhone audio to

initialize audiounit with kAudioFormatiLBC

Submitted by £可爱£侵袭症+ on 2019-12-13 05:24:15
Question: I'm trying to initialize an AudioUnit to record audio using iLBC. Unfortunately I need to use iLBC as the codec and I cannot choose a different one. After reading the documentation and forums, I found that the correct stream descriptor for using iLBC should be something like: streamDesc.mSampleRate = 8000.0; streamDesc.mFormatID = kAudioFormatiLBC; streamDesc.mChannelsPerFrame = 1; then I use: AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &streamDesc); to fill in the empty

Audio File: Playing data through one Speaker Only?

Submitted by 蹲街弑〆低调 on 2019-12-13 05:15:22
Question: I am working on a simple application which does a speaker test. It is intended to first play on the left speaker, then on the right one (or based on selection). As there is no direct way of achieving this, I am trying to overwrite alternate bytes. I checked in a hex editor, and the bytes do repeat in alternate pairs (2 bytes). When I overwrite them, it is still playing sound in both speakers. I am currently using 16-bit signed little-endian. Am I doing something wrong? These are the
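The usual pitfall here is the unit of overwriting: 16-bit interleaved stereo is a sequence of 2-byte samples laid out L0 R0 L1 R1 ..., so silencing one speaker means zeroing every other *16-bit sample*, not every other byte (zeroing alternate bytes corrupts both channels instead of muting one). A sketch, assuming the data has been reinterpreted as int16 values past any file header:

```c
#include <stdint.h>
#include <stddef.h>

/* Silence one channel of 16-bit interleaved stereo.
 * channel: 0 = left, 1 = right. n_samples counts int16 values
 * across both channels, so it should be even. */
void mute_channel(int16_t *samples, size_t n_samples, int channel) {
    for (size_t i = (size_t)channel; i < n_samples; i += 2)
        samples[i] = 0;  /* step by 2 samples = one stereo frame */
}
```

Also note that zeroing must skip the container header (e.g. the 44-byte canonical WAV header), or the format metadata itself gets clobbered.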

Issue with playing a music file during a phone call (iOS SDK)

Submitted by 大城市里の小女人 on 2019-12-13 04:50:37
Question: I have two pieces of code. Code 1, which plays a music file via the in-ear speaker: NSString *tileDirectory = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"8Ocean.mp3"]; NSURL *fileURL = [[NSURL alloc] initFileURLWithPath:tileDirectory]; NSError *error = nil; self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error]; if (!error) { [self.audioPlayer prepareToPlay]; UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;