How would you connect an iPod library asset to an Audio Queue Service and process with an Audio Unit?

Sounds like you have a couple questions stacked in there.

When you set up an AVAssetReader you can pass in a dictionary of settings. Here is how I create my AVAssetReaders...

    AVAssetReader* CreateAssetReaderFromSong(AVURLAsset* songAsset) {

        if ([songAsset.tracks count] <= 0)
            return NULL;

        // Grab the first track of the asset (the audio track for an iPod song).
        AVAssetTrack* songTrack = [songAsset.tracks objectAtIndex:0];

        // Decode to 16-bit, little-endian, signed-integer, interleaved LPCM.
        // AVSampleRateKey and AVNumberOfChannelsKey are not supported here,
        // so the output keeps the track's native sample rate and channel count.
        NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
                                            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                            [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                            [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                            [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                            [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                            nil];

        NSError* error = nil;
        AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
        if (reader == nil) {
            [outputSettingsDict release];
            return NULL;
        }

        AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack
                                                                            outputSettings:outputSettingsDict];
        [reader addOutput:output];
        [output release];
        [outputSettingsDict release];   // balance the alloc above (manual retain/release)

        return reader;
    }
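
For completeness, getting the AVURLAsset from an iPod library item looks something like this. This is a minimal sketch, assuming you already have an MPMediaItem (e.g. from an MPMediaPickerController), and the function name is mine; note that MPMediaItemPropertyAssetURL returns nil for DRM-protected tracks.

    #import <MediaPlayer/MediaPlayer.h>
    #import <AVFoundation/AVFoundation.h>

    AVAssetReader* CreateAssetReaderFromMediaItem(MPMediaItem* item) {
        NSURL* assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
        if (assetURL == nil)
            return NULL;   // DRM-protected or otherwise unreadable

        AVURLAsset* songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
        return CreateAssetReaderFromSong(songAsset);
    }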

As for splitting the left and right channels: since the output is interleaved, you can walk the buffer in strides determined by your 'AVLinearPCMBitDepthKey' setting.

For 16-bit samples it looks something like this...

    // pAD points at the interleaved 16-bit samples (SInt16*); even indices
    // hold the left channel, odd indices the right.
    for (int j = 0; j < tBufCopy; j++, pAD += 2) {   // fill the processing buffers
        mProcessingBuffer.Left[tBlockUsed + j]  = (SInt32)pAD[0];
        mProcessingBuffer.Right[tBlockUsed + j] = (SInt32)pAD[1];
    }

Now I assume you need the split channels for your processing, but having the data in interleaved format is really quite convenient: you can generally take the straight interleaved buffer and pass it right back to the AudioQueue or Remote I/O callback, and it will play correctly.
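
For the Remote I/O case, the render callback would look roughly like the sketch below. The SourceData struct and how it arrives through inRefCon are my assumptions; the callback itself uses the standard AURenderCallback signature.

    #include <AudioUnit/AudioUnit.h>

    // Hypothetical bookkeeping for the decoded, interleaved 16-bit samples.
    typedef struct {
        SInt16* samples;     // interleaved L/R sample data
        UInt32  frameCount;  // total frames available
        UInt32  readHead;    // next frame to play
    } SourceData;

    static OSStatus RenderCallback(void* inRefCon,
                                   AudioUnitRenderActionFlags* ioActionFlags,
                                   const AudioTimeStamp* inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList* ioData) {
        SourceData* src = (SourceData*)inRefCon;
        SInt16* out = (SInt16*)ioData->mBuffers[0].mData;

        for (UInt32 i = 0; i < inNumberFrames; i++) {
            if (src->readHead < src->frameCount) {
                out[2 * i]     = src->samples[2 * src->readHead];      // left
                out[2 * i + 1] = src->samples[2 * src->readHead + 1];  // right
                src->readHead++;
            } else {
                out[2 * i] = out[2 * i + 1] = 0;  // out of data: play silence
            }
        }
        return noErr;
    }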

In order to get the audio playing using the AudioQueue framework the data should follow this flow:

AVAssetReader -> NSData Buffer -> AudioQueueBuffer
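
The first arrow, pulling decoded PCM out of the reader into an NSData buffer, isn't shown above, so here is a minimal sketch of that step. The ReadSongData name is mine, and in practice you would drain the reader incrementally on a background thread rather than buffering the whole song at once.

    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>

    NSData* ReadSongData(AVAssetReader* reader) {
        AVAssetReaderTrackOutput* output = [reader.outputs objectAtIndex:0];
        NSMutableData* data = [NSMutableData data];

        [reader startReading];
        while (reader.status == AVAssetReaderStatusReading) {
            CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
            if (sampleBuffer == NULL)
                continue;   // the status check terminates the loop

            // Append the raw PCM bytes from the sample's block buffer.
            CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
            size_t length = CMBlockBufferGetDataLength(blockBuffer);
            [data increaseLengthBy:length];
            CMBlockBufferCopyDataBytes(blockBuffer, 0, length,
                                       (char*)[data mutableBytes] + [data length] - length);

            CMSampleBufferInvalidate(sampleBuffer);
            CFRelease(sampleBuffer);
        }
        return data;
    }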

Then, in the AudioQueue callback where it asks for more data, simply fill and enqueue the AudioQueueBuffer. Something like...

    - (void)audioQueueCallback:(AudioQueueRef)aq buffer:(AudioQueueBufferRef)buffer {

        // srcData / mBufferByteSize hold the next chunk of PCM from the reader.
        memcpy(buffer->mAudioData, srcData, mBufferByteSize);
        buffer->mAudioDataByteSize = mBufferByteSize;

        // ...

        // Uncompressed constant-bit-rate LPCM needs no packet descriptions,
        // so pass 0 and NULL.
        AudioQueueEnqueueBuffer(mQueue, buffer, 0, NULL);
    }
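
The queue itself has to be created with an AudioStreamBasicDescription that matches the reader's output settings above (16-bit, signed-integer, interleaved LPCM). A hedged setup sketch: the 44100.0 sample rate is an assumption, since the reader keeps the track's native rate, and MyAQOutputCallback stands in for a hypothetical C trampoline into the Objective-C method above.

    #include <AudioToolbox/AudioToolbox.h>

    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;   // assumption: use the track's real rate
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 2;         // interleaved stereo
    asbd.mBitsPerChannel   = 16;
    asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * (asbd.mBitsPerChannel / 8);
    asbd.mFramesPerPacket  = 1;         // uncompressed PCM: one frame per packet
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;

    AudioQueueRef mQueue;
    AudioQueueNewOutput(&asbd, MyAQOutputCallback, NULL /*user data*/,
                        NULL, NULL, 0, &mQueue);
    // Allocate a few AudioQueueBuffers, prime them, then AudioQueueStart(mQueue, NULL).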