I want to call 20 times per second the installTapOnBus:bufferSize:format:block:


Question


I want to display a waveform of the microphone input in real time. I implemented this using installTapOnBus:bufferSize:format:block:, but the block is only called about three times per second. I want it to be called 20 times per second (at 44.1 kHz, that would be a buffer of about 44100 / 20 = 2205 frames). Where can I set this?

AVAudioSession *audioSession = [AVAudioSession sharedInstance];

NSError* error = nil;
if (audioSession.isInputAvailable) [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
if(error){
    return;
}

[audioSession setActive:YES error:&error];
if(error){
    return;
}

self.engine = [[[AVAudioEngine alloc] init] autorelease];

AVAudioMixerNode* mixer = [self.engine mainMixerNode];
AVAudioInputNode* input = [self.engine inputNode];
[self.engine connect:input to:mixer format:[input inputFormatForBus:0]];

// tap ... the block fires once per 16537 frames (≈ 0.375 s at 44.1 kHz)
// and this does not change even if you change the bufferSize
[input installTapOnBus:0 bufferSize:4096 format:[input inputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {

    for (UInt32 i = 0; i < buffer.audioBufferList->mNumberBuffers; i++) {
        Float32 *data = buffer.audioBufferList->mBuffers[i].mData;
        UInt32 frames = buffer.audioBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);

        // create waveform
        ...
    }
}];

[self.engine startAndReturnError:&error];
if (error) {
    return;
}

Answer 1:


Reportedly, Apple Support said no (as of September 2014):

Yes, currently internally we have a fixed tap buffer size (0.375s), and the client specified buffer size for the tap is not taking effect.

but someone resized the buffer and got 40 ms: https://devforums.apple.com/thread/249510?tstart=0

I can't check it myself, since I'd need it in ObjC :(

UPDATE: it works! Just a single line:

[input installTapOnBus:0 bufferSize:1024 format:[mixer outputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    buffer.frameLength = 1024; // here
    ...
}];
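
To hit the 20-calls-per-second target from the question, the same trick would use a buffer of sampleRate / 20 frames. A minimal sketch, reusing `mixer` and `input` from the question's setup; `framesPer50ms` is a name of my choosing, and whether the frameLength override behaves the same at this size is untested:

AVAudioFormat *fmt = [mixer outputFormatForBus:0];
AVAudioFrameCount framesPer50ms = (AVAudioFrameCount)(fmt.sampleRate / 20.0); // 2205 at 44.1 kHz

[input installTapOnBus:0 bufferSize:framesPer50ms format:fmt block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    buffer.frameLength = framesPer50ms; // force the delivered length, as in the one-liner above
    // ~50 ms of samples per callback, i.e. 20 callbacks per second
}];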



Answer 2:


The AVAudioNode class reference states that the implementation may choose a buffer size other than the one that you supply, so as far as I know, we are stuck with the very large buffer size. This is unfortunate, because AVAudioEngine is otherwise an excellent Core Audio wrapper. Since I too need to use the input tap for something other than recording, I'm looking into The Amazing Audio Engine, as well as the Core Audio C API (see the iBook Learning Core Audio for excellent tutorials on it), as alternatives.

Update: It turns out that you can access the AudioUnit of the AVAudioInputNode and install a render callback on it. Via AVAudioSession, you can set your audio session's desired buffer size (not guaranteed, but certainly better than node taps). Thus far, I've gotten buffer sizes as low as 64 samples using this approach. I'll post back here with code once I've had a chance to test this.
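
A minimal sketch of that approach, under stated assumptions: mono Float32 input, at most 4096 frames per slice, and the engine from the question already created (`InputCallback` and `installRenderCallback` are hypothetical names). Whether a manually installed input callback coexists cleanly with the engine's own rendering is not documented, so treat this as experimental:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

static AudioUnit sIOUnit; // the engine input node's underlying IO unit

// Runs on the real-time audio thread: no allocation, no ObjC, no locks.
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    static Float32 samples[4096]; // assumes inNumberFrames <= 4096, mono
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(Float32);
    bufferList.mBuffers[0].mData = samples;

    // Pull the captured frames from bus 1, the input side of the IO unit.
    OSStatus status = AudioUnitRender(sIOUnit, ioActionFlags, inTimeStamp,
                                      1, inNumberFrames, &bufferList);
    if (status == noErr) {
        // hand `samples` off to the waveform code (e.g. via a ring buffer)
    }
    return status;
}

// Called once, after the engine has been set up.
- (void)installRenderCallback
{
    // Request ~5 ms IO buffers; this is a preference, not a guarantee.
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:0.005 error:&error];

    sIOUnit = self.engine.inputNode.audioUnit;
    AURenderCallbackStruct callback = { InputCallback, NULL };
    AudioUnitSetProperty(sIOUnit,
                         kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callback,
                         sizeof(callback));
}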




Answer 3:


I don't know why, or even whether, this works yet; I'm just trying a few things out. But the NSLogs clearly show a 21 ms interval, with 1024 samples coming in per buffer (1024 frames / 48 kHz ≈ 21.3 ms, so the input is evidently running at 48 kHz)...

AVAudioEngine* sEngine = nil;

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    /*
     Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
     */

    [glView startAnimation];

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];

    NSError* error = nil;
    if (audioSession.isInputAvailable) [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    if (error) {
        return;
    }

    [audioSession setActive:YES error:&error];
    if (error) {
        return;
    }

    sEngine = [[AVAudioEngine alloc] init];

    AVAudioMixerNode* mixer = [sEngine mainMixerNode];
    AVAudioInputNode* input = [sEngine inputNode];
    [sEngine connect:input to:mixer format:[input inputFormatForBus:0]];

    __block NSTimeInterval start = 0.0;

    [input installTapOnBus:0 bufferSize:1024 format:[input inputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {

        if (start == 0.0)
            start = [AVAudioTime secondsForHostTime:[when hostTime]];

        // Why does this work? Perhaps the engine reuses a smaller buffer internally,
        // with the tap block just using the frame length set here?
        // I am not sure this is supported by Apple.
        NSLog(@"buffer frame length %d", (int)buffer.frameLength);
        buffer.frameLength = 1024;

        UInt32 frames = 0;
        for (UInt32 i = 0; i < buffer.audioBufferList->mNumberBuffers; i++) {
            Float32 *data = buffer.audioBufferList->mBuffers[i].mData;
            frames = buffer.audioBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);
            // create waveform
            // ...
        }
        NSLog(@"%d frames are sent at %lf", (int)frames, [AVAudioTime secondsForHostTime:[when hostTime]] - start);
    }];

    [sEngine startAndReturnError:&error];
    if (error) {
        return;
    }
}



Answer 4:


As of iOS 13 in 2019, there is AVAudioSinkNode, which may better accomplish what you are looking for. While you could have also created a regular AVAudioUnit / Node and attached it to the input/output, the difference with an AVAudioSinkNode is that there is no output required. That makes it more like a tap and circumvents issues with incomplete chains that might occur when using a regular Audio Unit / Node.

For more information:

  • https://developer.apple.com/videos/play/wwdc2019/510/
  • https://devstreaming-cdn.apple.com/videos/wwdc/2019/510v8txdlekug3npw2m/510/510_whats_new_in_avaudioengine.pdf?dl=1
  • https://developer.apple.com/documentation/avfoundation/avaudiosinknode?language=objc

The relevant Swift code is on page 10 of the session's PDF (with a small error corrected below).

// Create Engine
let engine = AVAudioEngine()

// Create and Attach AVAudioSinkNode
let sinkNode = AVAudioSinkNode { (timeStamp, frames, audioBufferList) -> OSStatus in
    …
}
engine.attach(sinkNode)
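
Since the question is in Objective-C, a rough equivalent sketch (iOS 13+) might look like the following; note the sink still has to be connected to the input node before the engine starts, and the receiver block gets each captured buffer:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = engine.inputNode;

AVAudioSinkNode *sink = [[AVAudioSinkNode alloc] initWithReceiverBlock:
    ^OSStatus(const AudioTimeStamp *timestamp, AVAudioFrameCount frameCount,
              const AudioBufferList *inputData) {
        // consume inputData here; real-time rules apply (see below)
        return noErr;
    }];

[engine attachNode:sink];
[engine connect:input to:sink format:nil];

NSError *error = nil;
[engine startAndReturnError:&error];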

I imagine that you'll still have to follow the typical real-time audio rules when using this (e.g. no allocating/freeing memory, no ObjC calls, no locking or waiting on locks, etc.). A ring buffer may still be helpful here.
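
Such a ring buffer could be a minimal single-producer/single-consumer structure like the following sketch (plain C with a power-of-two capacity; the names are mine, the audio thread writes, the UI/waveform thread reads, and neither ever blocks):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_CAPACITY 8192 // must be a power of two

typedef struct {
    float data[RING_CAPACITY];
    _Atomic uint32_t head; // write index, advanced only by the audio thread
    _Atomic uint32_t tail; // read index, advanced only by the reader thread
} RingBuffer;

// Called from the audio thread; drops samples instead of blocking when full.
static bool RingWrite(RingBuffer *rb, const float *src, uint32_t n) {
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    if (RING_CAPACITY - (head - tail) < n) return false; // not enough room
    for (uint32_t i = 0; i < n; i++)
        rb->data[(head + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&rb->head, head + n, memory_order_release);
    return true;
}

// Called from the reader thread; returns the number of samples actually read.
static uint32_t RingRead(RingBuffer *rb, float *dst, uint32_t n) {
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    uint32_t avail = head - tail;
    if (n > avail) n = avail;
    for (uint32_t i = 0; i < n; i++)
        dst[i] = rb->data[(tail + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&rb->tail, tail + n, memory_order_release);
    return n;
}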




Answer 5:


You might be able to use a CADisplayLink to achieve this. A CADisplayLink gives you a callback each time the screen refreshes, which will typically be much more often than 20 times per second, so in your case additional logic may be needed to throttle or cap how often your method executes.
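
A minimal sketch of that throttling, using the preferredFramesPerSecond property available since iOS 10 (the system treats the value as a hint, not a guarantee, and refreshWaveform: is a hypothetical selector):

CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(refreshWaveform:)];
link.preferredFramesPerSecond = 20; // hint: fire ~20 times per second
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

- (void)refreshWaveform:(CADisplayLink *)link {
    // drain the audio buffer and redraw the waveform here
}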

This is obviously a solution that is quite separate from your audio work, and to the extent you need callbacks that reflect your audio session, it might not work. But when we need frequent recurring callbacks on iOS, this is often the approach of choice, so it's an idea.



Source: https://stackoverflow.com/questions/26115626/i-want-to-call-20-times-per-second-the-installtaponbusbuffersizeformatblock
