AudioQueue how to find out playback length of queued data

野趣味 2020-12-20 08:07

I am using AudioQueue to stream a song. My question is: how can I tell the playback length of the buffers I have already queued? I want to stream two seconds of data at a time.

2 Answers
  • 2020-12-20 08:44

    If the songs are in arbitrary compressed formats and you want exactly 2-second snips, you may have to convert the songs into raw PCM samples or WAV data first (AVAssetReader, et al.). Then you can count samples at a known sample rate, e.g. 88200 frames at a 44.1 kHz sample rate would be 2 seconds' worth.
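
    As a rough sketch of that frame arithmetic (plain C, assuming 16-bit stereo PCM at 44.1 kHz; the constants are illustrative):

    #include <stdio.h>

    int main(void)
    {
        // Assumed uncompressed format: 44.1 kHz sample rate, 16-bit samples, 2 channels.
        const double sampleRate    = 44100.0;
        const int    bytesPerFrame = 2 /* bytes per sample */ * 2 /* channels */;

        // Frames needed for a 2 second chunk: sampleRate * seconds.
        int  seconds      = 2;
        long framesWanted = (long)(sampleRate * seconds);   // 88200 frames
        long bytesWanted  = framesWanted * bytesPerFrame;   // 352800 bytes

        // Going the other way: playback length of an already queued buffer.
        long   queuedBytes   = bytesWanted;
        double queuedSeconds = queuedBytes / (double)bytesPerFrame / sampleRate;

        printf("%ld frames = %ld bytes = %.2f seconds\n",
               framesWanted, bytesWanted, queuedSeconds);
        return 0;
    }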

  • 2020-12-20 08:54

    Here is a class that uses Audio File Services to get at the bitrate/packet/frame data and to grab the number of bytes from a music file that correspond to x seconds of audio. The example has been tested with mp3 and m4a files.

    Header

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>
    @interface MusicChunker : NSObject
    {
        AudioFileID audioFile;
        int _sampleRate;
        int _totalFrames;
        UInt64 _framesPerPacket;
        UInt64 _totalPackets;
        UInt64 fileDataSize;
        AudioFilePacketTableInfo _packetInfo;
        int _fileLength;
        AudioStreamBasicDescription _fileDataFormat;
        NSFileHandle * _fileHandle;
        int _packetOffset;
        int _totalReadBytes;
        int _maxPacketSize;
        BOOL firstTime;
        BOOL _ism4a;
    }
    -(id)initWithURL:(NSURL*)url andFileType:(NSString*)ext;
    //gets next chunk that corresponds to seconds of audio
    -(NSData*)getNextDataChunk:(int)seconds;
    @end
    

    Implementation

    #import "MusicChunker.h"
    void ReportAudioError(OSStatus statusCode);
    @implementation MusicChunker
    
    - (id)init
    {
        self = [super init];
        if (self) {
            // Initialization code here.
        }
    
        return self;
    }
    void ReportAudioError(OSStatus statusCode) {
        switch (statusCode) {
            case noErr:
                break;
            case kAudioFileUnspecifiedError:
                [NSException raise:@"AudioFileUnspecifiedError" format:@"An unspecified error occured."];
                break;
            case kAudioFileUnsupportedDataFormatError:
                [NSException raise:@"AudioFileUnsupportedDataFormatError" format:@"The data format is not supported by the output file type."];
                break;
            case kAudioFileUnsupportedFileTypeError:
                [NSException raise:@"AudioFileUnsupportedFileTypeError" format:@"The file type is not supported."];
                break;
            case kAudioFileUnsupportedPropertyError:
                [NSException raise:@"AudioFileUnsupportedPropertyError" format:@"A file property is not supported."];
                break;
            case kAudioFilePermissionsError:
                [NSException raise:@"AudioFilePermissionsError" format:@"The operation violated the file permissions. For example, an attempt was made to write to a file opened with the kAudioFileReadPermission constant."];
                break;
            case kAudioFileNotOptimizedError:
                [NSException raise:@"AudioFileNotOptimizedError" format:@"The chunks following the audio data chunk are preventing the extension of the audio data chunk. To write more data, you must optimize the file."];
                break;
            case kAudioFileInvalidChunkError:
                [NSException raise:@"AudioFileInvalidChunkError" format:@"Either the chunk does not exist in the file or it is not supported by the file."];
                break;
            case kAudioFileDoesNotAllow64BitDataSizeError:
                [NSException raise:@"AudioFileDoesNotAllow64BitDataSizeError" format:@"The file offset was too large for the file type. The AIFF and WAVE file format types have 32-bit file size limits."];
                break;
            case kAudioFileInvalidPacketOffsetError:
                [NSException raise:@"AudioFileInvalidPacketOffsetError" format:@"A packet offset was past the end of the file, or not at the end of the file when a VBR format was written, or a corrupt packet size was read when the packet table was built."];
                break;
            case kAudioFileInvalidFileError:
                [NSException raise:@"AudioFileInvalidFileError" format:@"The file is malformed, or otherwise not a valid instance of an audio file of its type."];
                break;
            case kAudioFileOperationNotSupportedError:
                [NSException raise:@"AudioFileOperationNotSupportedError" format:@"The operation cannot be performed. For example, setting the kAudioFilePropertyAudioDataByteCount constant to increase the size of the audio data in a file is not a supported operation. Write the data instead."];
                break;
            case -50:
                [NSException raise:@"AudioFileBadParameter" format:@"An invalid parameter was passed, possibly the current packet and/or the inNumberOfPackets."];
                break;
            default:
                [NSException raise:@"AudioFileUknownError" format:@"An unknown error type %@ occured. [%s]", [NSNumber numberWithInteger:statusCode], (char*)&statusCode];
                break;
        }
    }
    
    + (AudioFileTypeID)hintForFileExtension:(NSString *)fileExtension
    {
        AudioFileTypeID fileTypeHint = kAudioFileAAC_ADTSType;
        if ([fileExtension isEqual:@"mp3"])
        {
            fileTypeHint = kAudioFileMP3Type;
        }
        else if ([fileExtension isEqual:@"wav"])
        {
            fileTypeHint = kAudioFileWAVEType;
        }
        else if ([fileExtension isEqual:@"aifc"])
        {
            fileTypeHint = kAudioFileAIFCType;
        }
        else if ([fileExtension isEqual:@"aiff"])
        {
            fileTypeHint = kAudioFileAIFFType;
        }
        else if ([fileExtension isEqual:@"m4a"])
        {
            fileTypeHint = kAudioFileM4AType;
        }
        else if ([fileExtension isEqual:@"mp4"])
        {
            fileTypeHint = kAudioFileMPEG4Type;
        }
        else if ([fileExtension isEqual:@"caf"])
        {
            fileTypeHint = kAudioFileCAFType;
        }
        else if ([fileExtension isEqual:@"aac"])
        {
            fileTypeHint = kAudioFileAAC_ADTSType;
        }
        return fileTypeHint;
    }
    
    -(id)initWithURL:(NSURL*)url andFileType:(NSString*)ext
    {
        self = [super init];
        if (self) {
            // Initialization code here.
            //OSStatus theErr = noErr;
            if([ext isEqualToString:@"mp3"])
            {
                _ism4a=FALSE;
            }
            else
                _ism4a=TRUE;
            firstTime=TRUE;
            _packetOffset=0;
            AudioFileTypeID hint=[MusicChunker hintForFileExtension:ext];
            OSStatus theErr = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, hint, &audioFile);
            if(theErr)
            {
                ReportAudioError(theErr);
    
            }
    
            UInt32 thePropertySize;// = sizeof(theFileFormat);
    
            thePropertySize = sizeof(fileDataSize);
            theErr = AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);
            if(theErr)
            {
                ReportAudioError(theErr);
    
            }
    
        thePropertySize = sizeof(_totalPackets);
        theErr = AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount, &thePropertySize, &_totalPackets);
            if(theErr)
            {
                ReportAudioError(theErr);
    
            }
        /*
        UInt32 size = sizeof(_packetInfo);
        theErr = AudioFileGetProperty(audioFile, kAudioFilePropertyPacketTableInfo, &size, &_packetInfo);
        if(theErr)
        {
            ReportAudioError(theErr);
        }
        */
            UInt32 size;
            size=sizeof(_maxPacketSize);
            theErr=AudioFileGetProperty(audioFile,  kAudioFilePropertyMaximumPacketSize , &size, &_maxPacketSize);
    
            size = sizeof( _fileDataFormat );
            theErr=AudioFileGetProperty( audioFile, kAudioFilePropertyDataFormat, &size, &_fileDataFormat );
            _framesPerPacket=_fileDataFormat.mFramesPerPacket;
            _totalFrames=_fileDataFormat.mFramesPerPacket*_totalPackets;
    
            _fileHandle=[[NSFileHandle fileHandleForReadingFromURL:url error:nil] retain];    
            _fileLength=[_fileHandle seekToEndOfFile];
            _sampleRate=_fileDataFormat.mSampleRate;
            _totalReadBytes=0;
            /*
             AudioFramePacketTranslation tran;//= .mFrame = 0, .mPacket = packetCount - 1, .mFrameOffsetInPacket = 0 };
             tran.mFrame=0;
             tran.mFrameOffsetInPacket=0;
             tran.mPacket=1;
             UInt32 size=sizeof(tran);
             theErr=AudioFileGetProperty(audioFile, kAudioFilePropertyPacketToFrame, &size, &tran);
             */
            /*
             AudioBytePacketTranslation bt;
             bt.mPacket=4;
             bt.mByteOffsetInPacket=0;
             size=sizeof(bt);
             theErr=AudioFileGetProperty(audioFile, kAudioFilePropertyPacketToByte, &size, &bt);
             */
    
    
        }
    
        return self;
    }
    //gets next chunk that corresponds to seconds of audio
    -(NSData*)getNextDataChunk:(int)seconds
    {
    
        //NSLog(@"%d, total packets",_totalPackets);
    
        if(_packetOffset>=_totalPackets)
            return nil;
    
        //sampleRate * seconds = number of wanted frames
        int framesWanted= _sampleRate*seconds;
        NSData *header=nil;
        int wantedPackets=  framesWanted/_framesPerPacket;
        if(firstTime && _ism4a)
        {
            firstTime=false;
            //when we have a header that was stripped off, we grab it from the original file
            int totallen= [_fileHandle seekToEndOfFile];
            int dif=totallen-fileDataSize;
            [_fileHandle seekToFileOffset:0];
            header= [_fileHandle readDataOfLength:dif];
         }
    
    
    
        int packetOffset=_packetOffset+wantedPackets;
    
        //bound condition
        if(packetOffset>_totalPackets)
        {
            packetOffset=_totalPackets;
        }
    
    
        UInt32 outBytes;
    
        UInt32 packetCount = wantedPackets;
        int x=packetCount * _maxPacketSize;
        void *data = (void *)malloc(x);
    
        OSStatus theErr=AudioFileReadPackets(audioFile, false, &outBytes, NULL, _packetOffset, &packetCount, data);
    
        if(theErr)
        {
            ReportAudioError(theErr);
        }
        //calculate bytes to read
    
        int bytesRead=outBytes;
    
        //update read bytes
        _totalReadBytes+=bytesRead;
       // NSLog(@"total bytes read %d", _totalReadBytes);
        _packetOffset=packetOffset;
    
    
    
        NSData *subdata=[[NSData dataWithBytes:data length:outBytes] retain];    
    
        free(data);
    
        if(header)
        {
            NSMutableData *data=[[NSMutableData alloc]init];
            [data appendData:header];
            [data appendData:subdata];
            [subdata release];
            return [data autorelease];
        }
    
        return [subdata autorelease];
    }
    
    @end
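
    A minimal usage sketch (songURL is assumed to be a local file URL for an mp3; handing each chunk to your AudioQueue buffers is left to the player code):

    MusicChunker *chunker = [[MusicChunker alloc] initWithURL:songURL andFileType:@"mp3"];
    int queuedSeconds = 0;
    NSData *chunk;
    while ((chunk = [chunker getNextDataChunk:2]) != nil)
    {
        // Hand `chunk` to the AudioQueue / streaming code here.
        // Each chunk corresponds to roughly 2 seconds of playback, so the
        // total queued playback length can be tracked directly.
        queuedSeconds += 2;
        NSLog(@"queued %lu bytes, ~%d seconds of audio so far",
              (unsigned long)[chunk length], queuedSeconds);
    }
    [chunker release];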
    