core-audio

aurioTouch sample app's audio playback/thru not working?

吃可爱长大的小学妹 submitted on 2019-12-10 10:44:14
Question: According to Apple's description, the aurioTouch sample app is supposed to "get the audio input and copy it to the output", which I take to mean that the app will play back any sound the iPhone's mic picks up through the iPhone's speakers. When I loaded the app onto my iPhone (updated to 2.2), however, the play-through feature does not seem to work at all. The on-screen oscilloscope responds as expected to voices and sounds, so the audio input half of the app is clearly working. Yes, I have…
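
One common culprit for silent play-through, worth noting alongside this excerpt: on iPhone OS 2.x the audio session category determines whether output stays live while recording. A minimal sketch using the (since deprecated) Audio Session C API; this is not aurioTouch's actual code:

    #include <AudioToolbox/AudioToolbox.h>

    // Request the play-and-record category so mic input and speaker
    // output can be active at the same time; the default category
    // mutes output while recording.
    static void enablePlayThrough(void) {
        AudioSessionInitialize(NULL, NULL, NULL, NULL);
        UInt32 category = kAudioSessionCategory_PlayAndRecord;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(category), &category);
        AudioSessionSetActive(true);
    }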

Controlling OS X volume in Snow Leopard

随声附和 submitted on 2019-12-10 10:24:46
Question: This is a follow-up to Controlling volume of running applications in Mac OS X via Objective-C, which explains how to set the volume on 10.5 or earlier. The AudioXXXXXGetProperty and AudioXXXXXSetProperty (and related) functions are deprecated in 10.6, per Technical Note TN2223. I'm not an expert in OS X or Core Audio programming, so I'm hoping someone has muddled through what's required in Snow Leopard and can help me (and others) out here. Answer 1: Here's an example to set the volume to 50%:
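
The answer's code is cut off in this excerpt. A hedged reconstruction of the usual Snow Leopard approach per TN2223 (not necessarily the original answerer's exact code): query the default output device, then set its virtual master volume through the AudioObject/AudioHardwareService APIs that replaced the deprecated AudioDevice calls.

    #include <CoreAudio/CoreAudio.h>
    #include <AudioToolbox/AudioToolbox.h>

    static OSStatus setMasterVolume(Float32 volume) {
        AudioDeviceID device = kAudioObjectUnknown;
        UInt32 size = sizeof(device);
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        // Find the default output device.
        OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject,
                                                  &addr, 0, NULL,
                                                  &size, &device);
        if (err != noErr) return err;

        // Set its virtual master volume (range 0.0 - 1.0).
        addr.mSelector = kAudioHardwareServiceDeviceProperty_VirtualMasterVolume;
        addr.mScope = kAudioDevicePropertyScopeOutput;
        return AudioHardwareServiceSetPropertyData(device, &addr, 0, NULL,
                                                   sizeof(volume), &volume);
    }

Calling setMasterVolume(0.5f) then sets the default output device to 50%.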

Audio Session Services: kAudioSessionProperty_OverrideAudioRoute with different routes for input & output

南楼画角 submitted on 2019-12-09 17:21:18
Question: I'm messing around with Audio Session Services. I'm trying to control the audio routes by calling AudioSessionSetProperty with kAudioSessionProperty_OverrideAudioRoute set to kAudioSessionOverrideAudioRoute_Speaker. The problem is that it changes the route for both input and output. What I want is to have input come from the headset's mic, and output go through the speakers. Any ideas? Thanks! Answer 1: You can do this in iOS 5 with the properties kAudioSessionProperty_InputSource and kAudioSessionProperty_OutputDestination. For…
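
A heavily hedged sketch of how those iOS 5 properties appear to be used; the value types (assumed here to be CFNumberRef IDs drawn from the corresponding Sources/Destinations arrays) should be verified against AudioServices.h, as this is not the full original answer:

    // List the routes the current hardware offers.
    CFArrayRef destinations = NULL;
    UInt32 size = sizeof(destinations);
    OSStatus err = AudioSessionGetProperty(kAudioSessionProperty_OutputDestinations,
                                           &size, &destinations);
    // Each entry describes one available destination; pick the ID of the
    // speaker entry (assumption: IDs are CFNumberRef values), then:
    // CFNumberRef destinationID = ...;
    // AudioSessionSetProperty(kAudioSessionProperty_OutputDestination,
    //                         sizeof(destinationID), &destinationID);
    // The analogous calls with kAudioSessionProperty_InputSources /
    // kAudioSessionProperty_InputSource select the headset mic.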

Panning a mono signal with MultiChannelMixer & MTAudioProcessingTap

☆樱花仙子☆ submitted on 2019-12-09 03:45:28
I'm looking to pan a mono signal using MTAudioProcessingTap and a Multichannel Mixer audio unit, but I am getting mono output instead of panned stereo output. The documentation states: "The Multichannel Mixer unit (subtype kAudioUnitSubType_MultiChannelMixer) takes any number of mono or stereo streams and combines them into a single stereo output." So the mono output was unexpected. Any way around this? I ran a stereo signal through the exact same code and everything worked great: stereo output, panned as expected. Here's the code from my tap's prepare callback: static void tap…
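
For reference, panning on the Multichannel Mixer is exposed as a per-input-bus parameter, and the unit's output stream format must be stereo (2 channels) for a mono input to come out panned. A minimal sketch; mixerUnit (the AudioUnit obtained for the mixer node) and the bus number are assumptions, not taken from the question's code:

    // Pan input bus 0: -1.0 = hard left, 0.0 = center, 1.0 = hard right.
    OSStatus err = AudioUnitSetParameter(mixerUnit,
                                         kMultiChannelMixerParam_Pan,
                                         kAudioUnitScope_Input,
                                         0,      // input bus number
                                         0.75f,  // mostly right
                                         0);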

Use Audio Unit I/O to create audio on the fly?

↘锁芯ラ submitted on 2019-12-09 00:55:03
Question: I am doing a POC in which I need to create an app that takes input from the iPhone mic and routes the output to a Bluetooth headset/speakers. I referred to the following code: http://www.stefanpopp.de/2011/capture-iphone-microphone/ The code works flawlessly, but it produces the output via the in-call speaker. Can anyone suggest where I should edit the code to re-route the output to Bluetooth speakers? Source: https://stackoverflow.com/questions/20393249/use-audio-unit-i-o-to-create-audio-on-the-fly
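
One place to look, sketched under the assumption that the app can use the AVAudioSession Objective-C API (iOS 6+) instead of the linked tutorial's C session calls: the category options control whether Bluetooth routes are eligible at all.

    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // Allowing Bluetooth makes paired HFP devices valid input/output routes.
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
             withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                   error:&error];
    [session setActive:YES error:&error];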

How to correctly read decoded PCM samples on iOS using AVAssetReader — currently incorrect decoding

一个人想着一个人 submitted on 2019-12-08 22:48:29
Question: I am currently working on an application as part of my Bachelor in Computer Science. The application will correlate data from the iPhone hardware (accelerometer, GPS) with the music being played. The project is still in its infancy; I have worked on it for only 2 months. Where I am right now, and where I need help, is reading PCM samples from songs in the iTunes library and playing them back using an audio unit. Currently the implementation I would like working does the…
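
A hedged sketch of the reading half (asset and track are assumed to be an AVAsset and its audio AVAssetTrack): asking AVAssetReader for linear PCM output yields decoded samples in a format an audio unit can consume.

    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset
                                                          error:&error];
    // Request decoded, interleaved 16-bit LPCM.
    NSDictionary *settings = @{
        AVFormatIDKey:            @(kAudioFormatLinearPCM),
        AVSampleRateKey:          @44100.0,
        AVNumberOfChannelsKey:    @2,
        AVLinearPCMBitDepthKey:   @16,
        AVLinearPCMIsFloatKey:    @NO,
        AVLinearPCMIsBigEndianKey:@NO,
        AVLinearPCMIsNonInterleaved: @NO
    };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];
    // Then pull CMSampleBufferRefs with [output copyNextSampleBuffer]
    // and feed their bytes to the audio unit's render path.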

iOS. Record at 96kHz with USB microphone

♀尐吖头ヾ submitted on 2019-12-08 19:21:20
Question: I am trying to record at the full 96 kHz with my RØDE iXY USB microphone. Recording goes without error, and when I launch the app with the mic connected, I see that AVAudioSession is running successfully at a 96 kHz sample rate. But if I look at the spectrum, it is clear that there is nothing but resampling noise above 20 kHz. For comparison, the spectrum of the same recording made with the app bundled with the USB mic (RØDE Rec) shows content all the way up. Is there anything else I must do to record at native 96 kHz? Or maybe the…
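
Since the session itself reports 96 kHz, the resampling is probably happening at another stage of the chain. A hedged sketch of the points worth pinning down (AVAudioSession API, iOS 6+); the recording-format advice is an assumption, as that code is not visible in the excerpt:

    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryRecord error:&error];
    [session setPreferredSampleRate:96000.0 error:&error];
    [session setActive:YES error:&error];
    // Verify the hardware actually granted the rate...
    NSLog(@"hardware sample rate: %f", session.sampleRate);
    // ...and make sure the recording format (AVAudioRecorder settings or
    // the audio unit's ASBD) is also 96 kHz; any 44.1/48 kHz stage in
    // between reintroduces resampling noise above 20 kHz.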

Delta calculation between MIDIPackets do not seem to be right

ⅰ亾dé卋堺 submitted on 2019-12-08 15:54:32
I am trying to read a MIDI file and play all the MIDI events with a synthesizer. The way the synth works is: it has a circular buffer that you write MIDI data to; you then call GenerateSamples() on it, and it processes that MIDI data and gives you back the number of samples you ask for. I'm using AudioToolbox's music player and have set up a MIDIReadProc where I write those MIDI packets to a buffer, then have a separate thread polling that buffer, writing the data to the ring buffer, and generating the appropriate number of samples. I'm taking the delta of the timestamp of the…
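
One detail that commonly breaks this delta math, sketched here with assumed variable names: MIDIPacket timestamps are mach host-time ticks, not seconds, so they must go through mach_timebase_info before being turned into a sample count.

    #include <CoreMIDI/CoreMIDI.h>
    #include <mach/mach_time.h>

    // Convert mach host-time ticks to seconds.
    static double hostTicksToSeconds(UInt64 ticks) {
        static mach_timebase_info_data_t timebase;
        if (timebase.denom == 0) mach_timebase_info(&timebase);
        return (double)ticks * timebase.numer / timebase.denom / 1e9;
    }

    // Delta between consecutive packets -> number of samples to generate.
    static UInt32 samplesBetweenPackets(MIDITimeStamp previous,
                                        MIDITimeStamp current,
                                        double sampleRate) {
        double deltaSeconds = hostTicksToSeconds(current - previous);
        return (UInt32)(deltaSeconds * sampleRate + 0.5); // round to nearest
    }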

error converting AudioBufferList to CMBlockBufferRef

て烟熏妆下的殇ゞ submitted on 2019-12-08 15:48:55
Question: I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and such) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, as I can hear things working. I am setting the subType to GenericOutput, though…
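
For the conversion itself, a hedged sketch (asbd, numFrames, presentationTime, and bufferList are assumed from the surrounding render code): CMSampleBufferSetDataBufferFromAudioBufferList both creates the backing CMBlockBuffer and attaches the AudioBufferList's data, after which the sample buffer can be appended to an AVAssetWriterInput.

    CMAudioFormatDescriptionRef format = NULL;
    CMSampleBufferRef sampleBuffer = NULL;
    OSStatus err = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd,
                                                  0, NULL, 0, NULL, NULL,
                                                  &format);
    CMSampleTimingInfo timing = {
        CMTimeMake(1, (int32_t)asbd.mSampleRate),  // duration of one frame
        presentationTime,                          // PTS of this buffer
        kCMTimeInvalid                             // no decode timestamp
    };
    err = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL,
                               format, numFrames, 1, &timing, 0, NULL,
                               &sampleBuffer);
    // Creates the CMBlockBuffer and attaches the rendered audio data.
    err = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
              kCFAllocatorDefault, kCFAllocatorDefault, 0, &bufferList);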

Synchronising with Core Audio Thread

独自空忆成欢 submitted on 2019-12-08 10:56:15
Question: I am using the render callback of the ioUnit to store the audio data into a circular buffer:

    OSStatus ioUnitRenderCallback(void                       *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp       *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList            *ioData)
    {
        OSStatus err = noErr;
        AMNAudioController *This = (__bridge AMNAudioController *)inRefCon;
        err = AudioUnitRender(This.encoderMixerNode->unit, ioActionFlags,
                              inTimeStamp, inBusNumber, inNumberFrames, ioData);
        // …
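
The usual pattern here, sketched with TPCircularBuffer as one common lock-free choice (not necessarily what this code uses): the render callback only copies bytes into the ring buffer, and everything that might block (locks, Objective-C messaging, allocation) happens on the consumer thread.

    #include "TPCircularBuffer.h"

    // Producer side (audio thread): copy the rendered bytes and nothing else.
    static void produceAudio(TPCircularBuffer *ringBuffer,
                             AudioBufferList *ioData) {
        TPCircularBufferProduceBytes(ringBuffer,
                                     ioData->mBuffers[0].mData,
                                     ioData->mBuffers[0].mDataByteSize);
    }

    // Consumer side (worker thread): drain whatever has accumulated.
    static void consumeAudio(TPCircularBuffer *ringBuffer) {
        int32_t availableBytes = 0;
        void *tail = TPCircularBufferTail(ringBuffer, &availableBytes);
        if (tail && availableBytes > 0) {
            // ... encode/process availableBytes bytes starting at tail ...
            TPCircularBufferConsume(ringBuffer, availableBytes);
        }
    }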