core-audio

How to play audio backwards?

Submitted by 流过昼夜 on 2019-12-05 18:22:17
Some people have suggested reading the audio data from end to start, creating a copy written from start to end, and then simply playing that reversed audio data. Are there existing examples for iOS of how this is done? I found an example project called MixerHost, which at one point uses an AudioUnitSampleType holding the audio data that has been read from file, and assigns it to a buffer. This is defined as: typedef SInt32 AudioUnitSampleType; #define kAudioUnitSampleFractionBits 24 And according to Apple: The canonical audio sample type for audio units and other audio processing in iPhone OS is …
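
One way to implement the copy-and-reverse idea above is to read the file into an AVAudioPCMBuffer and reverse the samples there, rather than working with the fixed-point AudioUnitSampleType from MixerHost. A minimal sketch, assuming mono, non-interleaved Float32 data (the helper name `reversedCopy` is illustrative, not from the original post):

```swift
import AVFoundation

// Reverse the samples of a mono, non-interleaved float PCM buffer.
// Multi-channel audio would need each channel reversed separately.
func reversedCopy(of buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let src = buffer.floatChannelData?[0],
          let out = AVAudioPCMBuffer(pcmFormat: buffer.format,
                                     frameCapacity: buffer.frameLength),
          let dst = out.floatChannelData?[0] else { return nil }

    let frames = Int(buffer.frameLength)
    for i in 0..<frames {
        dst[i] = src[frames - 1 - i]   // copy samples back to front
    }
    out.frameLength = buffer.frameLength
    return out
}
```

The reversed buffer can then be scheduled on an AVAudioPlayerNode just like the original one.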

Audio Queue: AudioQueueStart returns -50

Submitted by 寵の児 on 2019-12-05 17:16:25
I'm trying to write a microphone power meter module in a GLES app (Unity3D). It works fine in a UIKit application, but when I integrate it into my Unity3D project, the AudioQueue cannot start properly. The result code of calling AudioQueueStart is always -50, but what does -50 mean? I can't find a reference in the iOS Developer Library. I have searched for this problem and know that someone hit the same issue in a cocos2d application, so maybe that has some relevance. Here is my code for starting the audio queue: UInt32 ioDataSize = sizeof(sampleRate); AudioSessionGetProperty(kAudioSessionProperty …
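
For context, -50 is the old Mac OS paramErr, surfaced in Core Audio as kAudio_ParamError: one of the arguments passed to the call is invalid, for example a queue that was never created or a partially filled-out AudioStreamBasicDescription. A minimal sketch of checking the status in Swift (the original code is C; the function here is illustrative):

```swift
import AudioToolbox

// Start an audio queue and report the OSStatus if it fails.
func start(_ queue: AudioQueueRef?) {
    guard let queue = queue else {
        print("Audio queue was never created")   // a common source of -50
        return
    }
    let status = AudioQueueStart(queue, nil)     // nil = start as soon as possible
    if status != noErr {
        print("AudioQueueStart failed with OSStatus \(status)")
    }
}
```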

Intercept global audio output in OS X?

Submitted by 一世执手 on 2019-12-05 16:11:39
Has anyone come across a way to intercept (and modify) audio in OS X before it reaches the speakers? I realize I can build a driver and change the audio settings to output there, but what I would like to do is use the existing audio output and manipulate the stream before it reaches the chosen device, without the driver-redirect trick. I'd also like to do the inverse and hook the microphone stream before it hits the rest of the pipeline. Is this even possible? There are two kinds of CoreAudio "drivers": kernel-level and user-space. From your question it isn't clear whether you want to avoid …
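
User-space Core Audio access itself is straightforward; it is sitting in the middle of the output stream that requires a driver or an Audio Server plug-in. As a small illustration of the user-space side only, here is a Swift sketch that asks the HAL for the current default output device:

```swift
import CoreAudio

// Query the HAL for the default output device. This runs entirely in user
// space; intercepting the stream on its way to that device still requires
// a driver or an Audio Server plug-in, as discussed above.
func defaultOutputDevice() -> AudioDeviceID? {
    var deviceID = AudioDeviceID(0)
    var size = UInt32(MemoryLayout<AudioDeviceID>.size)
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    let status = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                            &address, 0, nil, &size, &deviceID)
    return status == noErr ? deviceID : nil
}
```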

How do I create an AUAudioUnit that implements multiple audio units?

Submitted by 这一生的挚爱 on 2019-12-05 15:35:54
In Apple's docs for creating an AUAudio Unit (Here: https://developer.apple.com/documentation/audiotoolbox/auaudiounit/1387570-initwithcomponentdescription ) they claim that A single audio unit subclass may implement multiple audio units—for example, an effect that can also function as a generator, or a cluster of related effects. There are no examples of this online that I can find. Ideally it would be nice if your answer/solution involved using Swift and AVAudioEngine but I'd happily accept any answer that gets me moving in the right direction. Thanks in advance. I posted some source code to
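
The claim in the docs maps onto AUAudioUnit.registerSubclass(_:as:name:version:): the same subclass can be registered under more than one AudioComponentDescription. A minimal sketch, assuming a hypothetical subclass and made-up four-character codes:

```swift
import AudioToolbox
import AVFoundation

// Hypothetical subclass; its init(componentDescription:options:) can inspect
// the description it was created with and behave as an effect or a generator.
class VersatileAudioUnit: AUAudioUnit { }

// Build a FourCharCode from a 4-character ASCII string (codes here are made up).
func fourCC(_ s: String) -> FourCharCode {
    s.utf8.reduce(0) { ($0 << 8) + FourCharCode($1) }
}

// One subclass, two registrations: once as an effect, once as a generator.
func registerVersatileUnits() {
    let effectDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: fourCC("vers"),
        componentManufacturer: fourCC("Demo"),
        componentFlags: 0, componentFlagsMask: 0)
    let generatorDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Generator,
        componentSubType: fourCC("vers"),
        componentManufacturer: fourCC("Demo"),
        componentFlags: 0, componentFlagsMask: 0)

    AUAudioUnit.registerSubclass(VersatileAudioUnit.self, as: effectDesc,
                                 name: "Demo: Versatile (Effect)", version: 1)
    AUAudioUnit.registerSubclass(VersatileAudioUnit.self, as: generatorDesc,
                                 name: "Demo: Versatile (Generator)", version: 1)
}
```

Either description can then be instantiated with AVAudioUnit.instantiate(with:options:completionHandler:) and attached to an AVAudioEngine.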

Swift vs Objective-C pointer manipulation issue

Submitted by 送分小仙女□ on 2019-12-05 11:26:52
I have this code in Objective-C, which works fine: list = controller->audioBufferList; list->mBuffers[0].mDataByteSize = inNumberFrames*kSampleWordSize; list->mBuffers[1].mDataByteSize = inNumberFrames*kSampleWordSize; It works well: it updates the mDataByteSize field of mBuffers[0] and mBuffers[1]. I tried translating the same thing into Swift, but it doesn't work: public var audioBufferList:UnsafeMutableAudioBufferListPointer In a function: let listPtr = controller.audioBufferList.unsafeMutablePointer let buffers = UnsafeBufferPointer<AudioBuffer>(start: &listPtr.pointee.mBuffers, count: Int(listPtr …
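
One likely fix, given that audioBufferList is already an UnsafeMutableAudioBufferListPointer, is to use its collection subscript instead of rebuilding an UnsafeBufferPointer by hand. A minimal sketch with the original names passed in as parameters (the types are assumptions):

```swift
import AVFoundation

// UnsafeMutableAudioBufferListPointer is a mutable collection of AudioBuffer,
// so the byte sizes can be written through its subscript directly.
func updateByteSizes(_ list: UnsafeMutableAudioBufferListPointer,
                     inNumberFrames: UInt32,
                     kSampleWordSize: UInt32) {
    list[0].mDataByteSize = inNumberFrames * kSampleWordSize
    list[1].mDataByteSize = inNumberFrames * kSampleWordSize
}
```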

Setting a time limit when recording an audio clip?

Submitted by 血红的双手。 on 2019-12-05 10:37:24
I looked for search terms along the lines of the post title, but alas... I am building an iPhone app using AVFoundation. Is there a correct procedure to limit the amount of audio that will be recorded? I would like a maximum of 10 seconds. Thanks for any help/advice/tips/pointers. AVAudioRecorder has the following method: - (BOOL)recordForDuration:(NSTimeInterval)duration I think that will do the trick! http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference/AVAudioRecorder_ClassReference/Reference/Reference.html#//apple_ref/doc/uid/TP40008238 I don't normally work with …
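
A minimal Swift sketch of record(forDuration:), with a placeholder URL and settings (on iOS the audio session also has to be configured for recording, which is omitted here):

```swift
import AVFoundation

// Record at most 10 seconds of audio; the recorder stops itself afterwards.
// In real code, keep a strong reference to the recorder so it isn't deallocated.
func recordTenSecondClip() {
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("clip.m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    do {
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        if recorder.record(forDuration: 10) {
            print("Recording for up to 10 seconds…")
        }
    } catch {
        print("Could not create recorder: \(error)")
    }
}
```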

Bit-shifting audio samples from Float32 to SInt16 results in severe clipping

Submitted by 匆匆过客 on 2019-12-05 10:03:56
I'm new to iOS and its C underpinnings, but not to programming in general. My dilemma is this: I'm implementing an echo effect in a complex AudioUnits-based application. The application needs reverb, echo, and compression, among other things. However, the echo only works right when I use a particular AudioStreamBasicDescription format for the audio samples generated in my app. This format, however, doesn't work with the other AudioUnits. While there are other ways to solve this problem, fixing the bit-twiddling in the echo algorithm might be the most straightforward approach. The …
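
If the samples really are plain Float32 values in the range -1.0…1.0 (rather than 8.24 fixed point), the usual conversion to SInt16 is to scale and clamp rather than bit-shift; shifting such values is a typical source of the severe clipping described. A sketch of the scale-and-clamp version:

```swift
// Convert Float32 samples in [-1.0, 1.0] to SInt16 by scaling and clamping,
// instead of bit-shifting values that are not fixed-point.
func toInt16(_ samples: [Float32]) -> [Int16] {
    samples.map { sample in
        let scaled = sample * Float32(Int16.max)
        let clamped = max(Float32(Int16.min), min(Float32(Int16.max), scaled))
        return Int16(clamped)
    }
}
```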

Write array of floats to a WAV audio file in Swift

Submitted by 做~自己de王妃 on 2019-12-05 09:36:38
I have this flow now: I record audio with AudioEngine, send it to an audio processing library, and get an audio buffer back; then I want to write it to a WAV file, but I'm totally confused about how to do that in Swift. I've tried this snippet from another Stack Overflow answer ( load a pcm into a AVAudioPCMBuffer ), but it writes an empty, corrupted file: //get data from library var len : CLong = 0 let res: UnsafePointer<Double> = getData(CLong(), &len ) let bufferPointer: UnsafeBufferPointer = UnsafeBufferPointer(start: res, count: len) //tranform it to Data let arrayDouble = Array …
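
A minimal sketch of getting an array of Float samples onto disk as a WAV file via AVAudioPCMBuffer and AVAudioFile, assuming mono audio and a known sample rate (the function and parameter names are illustrative, not from the original post):

```swift
import AVFoundation

// Write mono float samples to a .wav file. The URL's extension tells
// AVAudioFile which container to produce.
func writeWav(samples: [Float], sampleRate: Double, to url: URL) throws {
    guard let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                     sampleRate: sampleRate,
                                     channels: 1,
                                     interleaved: false),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(samples.count))
    else { return }

    buffer.frameLength = AVAudioFrameCount(samples.count)
    for (i, sample) in samples.enumerated() {
        buffer.floatChannelData![0][i] = sample
    }

    let file = try AVAudioFile(forWriting: url,
                               settings: format.settings,
                               commonFormat: .pcmFormatFloat32,
                               interleaved: false)
    try file.write(from: buffer)
}
```

If the library hands back Double samples, as in the snippet above, they would need to be converted to Float first (or the buffer format changed accordingly).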

Mixing down two files together using Extended Audio File Services

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-05 09:20:19
I am doing some custom audio post-processing using audio units. I have two files that I am merging together (links below), but I am getting some weird noise in the output. What am I doing wrong? I have verified that before this step the two files ( workTrack1 and workTrack2 ) are in a proper state and sound good, and no errors are hit in the process. Buffer processing code: - (BOOL)mixBuffersWithBuffer1:(const int16_t *)buffer1 buffer2:(const int16_t *)buffer2 outBuffer:(int16_t * …
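
One frequent cause of noise when summing two 16-bit streams is integer overflow in the mixing loop; the cure is to mix in a wider type and clamp back into the Int16 range. A sketch of that idea in Swift (the original method is Objective-C, so this is a sketch of the logic, not a drop-in replacement):

```swift
// Sum two 16-bit sample arrays in 32-bit arithmetic, then clamp so the
// result cannot wrap around into loud, noisy artifacts.
func mix(_ a: [Int16], _ b: [Int16]) -> [Int16] {
    zip(a, b).map { x, y in
        Int16(clamping: Int32(x) + Int32(y))
    }
}
```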

kAudioDevicePropertyBufferFrameSize replacement for iOS

Submitted by 久未见 on 2019-12-05 07:13:58
I was trying to set up an audio unit to render the music (instead of an Audio Queue, which was too opaque for my purposes). iOS doesn't have the kAudioDevicePropertyBufferFrameSize property; any idea how I can derive this value to set up the buffer size of my I/O unit? I found this post interesting: it asks about the possibility of using a combination of the kAudioSessionProperty_CurrentHardwareIOBufferDuration and kAudioSessionProperty_CurrentHardwareOutputLatency audio session properties to …
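
The usual iOS substitute (for the old AudioSession C properties mentioned above as well) is to request a preferred I/O buffer duration on AVAudioSession and then derive the frame count from the duration and sample rate the hardware actually granted. A minimal sketch:

```swift
import AVFoundation

// Ask for roughly 5 ms I/O buffers, then compute how many frames the
// render callback of the I/O unit can be expected to deliver.
func configureIOBuffer() throws -> AVAudioFrameCount {
    let session = AVAudioSession.sharedInstance()
    try session.setPreferredIOBufferDuration(0.005)   // a request, not a guarantee
    try session.setActive(true)
    // Read back the actual duration the hardware granted.
    return AVAudioFrameCount(session.ioBufferDuration * session.sampleRate)
}
```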