core-audio

iPhone - AVAudioPlayer - convert decibel level into percent

元气小坏坏 submitted on 2019-12-04 11:18:13
I'd like to update an existing iPhone application that uses AudioQueue for playing audio files. Its levels (peakPowerForChannel, averagePowerForChannel) were linear, from 0.0f to 1.0f. Now I'd like to use the simpler AVAudioPlayer class, which works fine; the only issue is that the levels are now in decibels, from -120.0f to 0.0f, rather than linear. Does anyone have a formula to convert them back to linear values between 0.0f and 1.0f? Thanks, Tom. Several Apple examples use the following formula to convert the decibels into a linear range (from 0.0 to 1.0): double percentage = pow (10, (0.05 * power));
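A minimal sketch of that conversion as a C helper; the metering-enabled AVAudioPlayer named player in the usage comment is a hypothetical instance, not from the original post:

#include <math.h>

// Map AVAudioPlayer's decibel metering (-120 dB ... 0 dB) back to a
// linear 0.0 - 1.0 range: linear = 10^(dB / 20).
static float linearLevelFromDecibels(float decibels) {
    if (decibels <= -120.0f) return 0.0f;     // treat the floor as silence
    return powf(10.0f, 0.05f * decibels);     // 0.05 * dB is dB / 20
}

// Usage, assuming a metering-enabled AVAudioPlayer named player:
// [player updateMeters];
// float percent = linearLevelFromDecibels([player averagePowerForChannel:0]);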

Starting with the Core Audio framework

时间秒杀一切 submitted on 2019-12-04 10:36:08
For a project that I intend to start on soon, I will need to play back compressed and uncompressed audio files. To do that, I intend to use the Core Audio framework. However, I have no prior experience in audio programming, and I'm really not sure where to start. Are there any beginner-level resources or sample projects that demonstrate how to build a simple audio player using Core Audio? A preview of a book on Core Audio just came out. I've started reading it, and as a beginner myself I find it helpful. It has a tutorial-style teaching method and is very clear in its explanations. I highly …

The iPhone 5 has 3 mics. Can I change from which one I'm recording?

拟墨画扇 submitted on 2019-12-04 10:32:51
The iPhone 5 has 3 microphones, according to its product presentation. After looking through the iFixit website and others, I now know where the bottom microphone is, and I've identified the one on the back, right next to the camera. There should be another one on the front, at the top, but I can't see it, so I assume it's behind the earpiece/receiver opening. (Is this correct?) I would like to record from two different microphones while the iPhone 5 is lying on its back (so the rear mic is out of the question). My question: is there some way I can record from both mics at the same time and …
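For reference, a hedged sketch (assuming the iOS 7+ AVAudioSession data-source API, which post-dates this question) of how the built-in mic's individual data sources — front, back, bottom — can be listed and one of them preferred. Note it selects a single data source; it does not by itself record from two mics at once:

#import <AVFoundation/AVFoundation.h>

// Sketch: list the built-in mic's data sources and prefer the front-facing one.
static void preferFrontBuiltInMic(void) {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryRecord error:nil];

    for (AVAudioSessionPortDescription *port in session.availableInputs) {
        if (![port.portType isEqualToString:AVAudioSessionPortBuiltInMic]) continue;

        for (AVAudioSessionDataSourceDescription *source in port.dataSources) {
            NSLog(@"Data source: %@ (orientation %@)", source.dataSourceName, source.orientation);
            if ([source.orientation isEqualToString:AVAudioSessionOrientationFront]) {
                [port setPreferredDataSource:source error:nil];   // pick the front mic
            }
        }
        [session setPreferredInput:port error:nil];
    }
}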

How to use kAudioUnitSubType_LowShelfFilter of kAudioUnitType_Effect which controls bass in Core Audio?

这一生的挚爱 submitted on 2019-12-04 09:29:26
Question: I'm back with one more question related to bass. I had already posted this question, How Can we control bass of music in iPhone, but it did not get as much attention as it should have. Since then I have done some more searching and have read up on Core Audio. I found some sample code that I want to share with you; here is the link to download it: iPhoneMixerEqGraphTest. Have a look at it. In this code, what I've seen is that the developer uses the preset iPod equalizer provided by Apple. Let's …
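To complement that sample, a minimal sketch (not taken from the linked project) of the component description for Apple's low-shelf effect and of setting its two documented parameters, cutoff frequency and gain, with AudioUnitSetParameter:

#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>

// Sketch: boost the bass with the Apple low-shelf filter unit.
// bassUnit is assumed to be an AudioUnit already opened from the
// description below (e.g. via AUGraphAddNode / AUGraphNodeInfo).
static void configureBassBoost(AudioUnit bassUnit) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Effect,
        .componentSubType      = kAudioUnitSubType_LowShelfFilter,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    (void)desc;   // shown for reference; the unit is assumed to be created from it

    // Boost everything below roughly 120 Hz by 6 dB.
    AudioUnitSetParameter(bassUnit, kAULowShelfParam_CutoffFrequency,
                          kAudioUnitScope_Global, 0, 120.0f, 0);
    AudioUnitSetParameter(bassUnit, kAULowShelfParam_Gain,
                          kAudioUnitScope_Global, 0, 6.0f, 0);
}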

How to find an audio file's length (in seconds)

删除回忆录丶 submitted on 2019-12-04 07:55:35
(Objective-C) I'm just using the simple AudioServicesPlaySystemSoundID and its counterparts, but I can't find in the documentation whether there is already a way to find the length of an audio file. I know there is AudioServicesGetPropertyInfo, but that seems to return a byte buffer; do audio files embed their length in themselves, so I can just extract it with this? Or is there perhaps a formula based on bit-rate and file size to convert to a length of time? mIL3S www.milkdrinkingcow.com According to a quick Google search, there is a formula: length-of-time (duration in seconds) = fileSize (in bytes) / bit …
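Besides the bit-rate arithmetic, Audio File Services can report a duration directly; a small sketch (assuming a CFURLRef to an on-disk file) using AudioFileGetProperty with kAudioFilePropertyEstimatedDuration:

#include <AudioToolbox/AudioToolbox.h>

// Sketch: ask Audio File Services for the estimated duration in seconds.
static Float64 audioFileDurationSeconds(CFURLRef fileURL) {
    AudioFileID file = NULL;
    Float64 duration = 0;
    UInt32 size = sizeof(duration);

    if (AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &file) == noErr) {
        AudioFileGetProperty(file, kAudioFilePropertyEstimatedDuration, &size, &duration);
        AudioFileClose(file);
    }
    return duration;   // 0 on failure
}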

How to get notifications when the headphones are plugged in/out on Mac?

做~自己de王妃 submitted on 2019-12-04 07:54:29
Question: I'd like to get notified when headphones are plugged into or unplugged from the headphone jack. I've searched around for this on Stack Overflow, but I can't seem to find what I'm looking for for the Mac; I can only find it for iOS. So, do you have any ideas on how to do this? What I want to do with it: when headphones are unplugged, I want to programmatically pause iTunes (an iOS-like feature). Thank you! Answer 1: You can observe changes using the CoreAudio framework. Both the headphones and the speakers …
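A hedged sketch of one way to observe this with Core Audio on the Mac: watch the default output device's data source, which flips between internal speakers and headphones when the jack is used (the 'hdpn'/'ispk' four-char codes mentioned in the comment are the commonly seen values, but treat them as an assumption):

#include <CoreAudio/CoreAudio.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

// Sketch: listen for data-source changes on the default output device.
static void watchHeadphoneJack(void) {
    // Find the current default output device.
    AudioObjectPropertyAddress defaultOut = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &defaultOut, 0, NULL, &size, &device);

    // Observe its output data source (internal speakers vs. headphones).
    AudioObjectPropertyAddress dataSource = {
        kAudioDevicePropertyDataSource,
        kAudioDevicePropertyScopeOutput,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectAddPropertyListenerBlock(device, &dataSource, dispatch_get_main_queue(),
        ^(UInt32 numberAddresses, const AudioObjectPropertyAddress *addresses) {
            AudioObjectPropertyAddress sourceAddr = {
                kAudioDevicePropertyDataSource,
                kAudioDevicePropertyScopeOutput,
                kAudioObjectPropertyElementMaster
            };
            UInt32 source = 0;
            UInt32 sourceSize = sizeof(source);
            AudioObjectGetPropertyData(device, &sourceAddr, 0, NULL, &sourceSize, &source);
            // 'hdpn' usually means headphones, 'ispk' internal speakers (assumption);
            // this is where iTunes could be paused.
            printf("Output data source changed: 0x%x\n", (unsigned)source);
        });
}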

Selecting input mic for Mac Audio Queue Services?

安稳与你 submitted on 2019-12-04 07:54:28
I am currently using the Mac OS X Audio Queue Services API for audio recording and sound analysis. It works fine using the default mic input. If there is more than one microphone plugged into the Mac (USB, headset jack, etc.), is there a way to programmatically enumerate and select which mic is to be used for audio input within an application (i.e. without having to send the user to the System Preferences panel, which may affect the user's other audio applications)? If so, which APIs should be used to select the mic input? sbooth: To enumerate available input devices please see my answer to …
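For completeness, a hedged sketch of the usual HAL approach: enumerate devices with kAudioHardwarePropertyDevices, keep only those with input streams, fetch each device's UID, and hand the chosen UID to the queue via kAudioQueueProperty_CurrentDevice (the wantedUID parameter is a stand-in for however the app lets the user pick):

#include <AudioToolbox/AudioToolbox.h>
#include <CoreAudio/CoreAudio.h>

// Sketch: list input-capable devices and point an AudioQueue at one of them.
static void selectInputDevice(AudioQueueRef queue, CFStringRef wantedUID) {
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);

    UInt32 count = size / sizeof(AudioDeviceID);
    AudioDeviceID devices[count];
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);

    for (UInt32 i = 0; i < count; i++) {
        // Skip devices with no input streams (output-only devices).
        AudioObjectPropertyAddress streams = {
            kAudioDevicePropertyStreams, kAudioDevicePropertyScopeInput,
            kAudioObjectPropertyElementMaster
        };
        UInt32 streamSize = 0;
        AudioObjectGetPropertyDataSize(devices[i], &streams, 0, NULL, &streamSize);
        if (streamSize == 0) continue;

        // The device UID string is what the queue property expects.
        AudioObjectPropertyAddress uidAddr = {
            kAudioDevicePropertyDeviceUID, kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        CFStringRef uid = NULL;
        UInt32 uidSize = sizeof(uid);
        AudioObjectGetPropertyData(devices[i], &uidAddr, 0, NULL, &uidSize, &uid);

        if (uid && CFEqual(uid, wantedUID)) {
            AudioQueueSetProperty(queue, kAudioQueueProperty_CurrentDevice,
                                  &uid, sizeof(uid));
        }
        if (uid) CFRelease(uid);
    }
}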

Is __cxa_throw safe to ignore with Core Audio?

醉酒当歌 submitted on 2019-12-04 07:47:44
A similar question has been asked, but I wanted to make it more specific to Core Audio, as some of us may have noticed Core Audio has very little room for error. As the answer to that question explains, __cxa_throw is a C++ unhandled exception, which can be ignored (this problem seems to be new with Xcode 4.5.1; I've never seen it before either). Can we say the same about Core Audio? What makes me nervous is that it has to do with the formatting of the audio, which a lot of my code depends on: I'm trying to convert an AAC file to LPCM. Output format: // set up the PCM output format for …
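The excerpt cuts off at the output format; for context, a typical interleaved 16-bit LPCM destination format for an AAC-to-PCM conversion looks roughly like this sketch (representative values, not the poster's actual ones):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: a 44.1 kHz, stereo, interleaved, 16-bit signed-integer PCM format,
// the kind of destination format typically handed to an ExtAudioFile or
// AudioConverter when decoding AAC to LPCM.
static AudioStreamBasicDescription makePCMOutputFormat(void) {
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = 44100.0;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcm.mChannelsPerFrame = 2;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = pcm.mChannelsPerFrame * (pcm.mBitsPerChannel / 8);
    pcm.mFramesPerPacket  = 1;                      // uncompressed: 1 frame per packet
    pcm.mBytesPerPacket   = pcm.mBytesPerFrame;
    return pcm;
}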

How to generate audio wave form programmatically while recording Voice in iOS?

拟墨画扇 submitted on 2019-12-04 07:46:52
Question: How do I generate an audio waveform programmatically while recording voice in iOS? I'm working on voice modulation / audio frequency in iOS. Everything is working fine; I just need a good, simple way to generate an audio waveform as sound is detected. Please don't refer me to the SpeakHere and aurioTouch code tutorials; I need suggestions from native app developers. I have recorded the audio and made it play back after recording. I have created a waveform and attached a screenshot. But it …
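One simple route, sketched here under assumptions not in the original post (an AVAudioRecorder with metering enabled, the iOS 10+ block-based NSTimer API, and a caller-supplied drawing block): poll the recorder's meters on a timer and convert each reading to a linear amplitude for whatever view draws the waveform.

#import <AVFoundation/AVFoundation.h>
#include <math.h>

// Sketch: sample the recorder's average power ~30 times a second and hand a
// 0..1 amplitude to a caller-supplied block that appends it to the waveform.
static NSTimer *startMeteringWaveform(AVAudioRecorder *recorder,
                                      void (^drawSample)(float amplitude)) {
    recorder.meteringEnabled = YES;
    return [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                           repeats:YES
                                             block:^(NSTimer *timer) {
        [recorder updateMeters];
        float db = [recorder averagePowerForChannel:0];   // -160 dB ... 0 dB
        drawSample(powf(10.0f, 0.05f * db));              // back to linear 0..1
    }];
}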

CoreAudio AudioUnitSetProperty always fails to set Sample Rate

跟風遠走 submitted on 2019-12-04 07:46:20
I need to change the output sample rate from 44.1 kHz to 32.0 kHz, but it always throws an error: Out: AudioUnitSetProperty-SF=\217\325\377\377, -10865. I don't know why it lets me set it for input but then won't set it for output. My code is:

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    OSStatus MyRenderer(void *inRefCon,
                        AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp,
                        UInt32 inBusNumber,
                        UInt32 inNumberFrames,
                        AudioBufferList *ioData) {
        NSLog(@"Running...");
        ioData->mBuffers[0].mDataByteSize = 2048;
        ioData->mBuffers[0].mData = lbuf;
        ioData- …
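For reference, -10865 is kAudioUnitErr_PropertyNotWritable; on an output (hardware) unit, the hardware side of the format generally cannot be changed this way. A hedged sketch of setting the client-side rate instead, on the input scope of bus 0 of the output unit (outputUnit is a stand-in for whatever unit the poster's code creates; the unit then converts to the hardware rate):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: declare that the audio we render for the output unit (input scope,
// bus 0) is at 32 kHz, rather than trying to change the hardware-side rate.
static OSStatus setClientSampleRate(AudioUnit outputUnit) {
    Float64 sampleRate = 32000.0;
    return AudioUnitSetProperty(outputUnit,
                                kAudioUnitProperty_SampleRate,
                                kAudioUnitScope_Input,   // the side we feed, not the hardware side
                                0,                       // bus 0
                                &sampleRate,
                                sizeof(sampleRate));
}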