Question
I am using a private library made for live broadcasting from an iPhone.
Every time a frame of audio is recorded, it calls a callback function:
void MyAQInputCallback(void *inUserData,
AudioQueueRef inQueue,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc);
Now how can I append this inBuffer to my AVAssetWriterInput, as I usually do with:
[self.audioWriterInput appendSampleBuffer:sampleBuffer];
I think maybe I need to convert the AudioQueueBufferRef to a CMSampleBufferRef somehow?
Thank you.
Answer 1:
I don't suppose you are still looking for a solution two years later, but just in case someone in a similar situation finds this question (as I did), here is my solution.
My Audio Queue callback function calls the appendAudioBuffer function below, passing it the AudioQueueBufferRef and its length (mAudioDataByteSize).
void appendAudioBuffer(void* pBuffer, long pLength)
{
    // CMSampleBuffers require a CMBlockBuffer to hold the media data; we
    // create a blockBuffer here that wraps the AudioQueueBuffer's data.
    CMBlockBufferRef blockBuffer;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                         pBuffer,
                                                         pLength,
                                                         kCFAllocatorNull, // don't free pBuffer; the queue owns it
                                                         NULL,
                                                         0,
                                                         pLength,
                                                         kCMBlockBufferAssureMemoryNowFlag,
                                                         &blockBuffer);
    if (status != kCMBlockBufferNoErr) {
        NSLog(@"CMBlockBufferCreateWithMemoryBlock failed: %d", (int)status);
        return;
    }

    // Timestamp of the current sample, relative to when recording started
    CFAbsoluteTime currentTime = CFAbsoluteTimeGetCurrent();
    CFTimeInterval elapsedTime = currentTime - mStartTime;
    CMTime timeStamp = CMTimeMake(elapsedTime * mTimeScale, mTimeScale);

    // Number of samples in the buffer
    long nSamples = pLength / mWaveRecorder->audioFormat()->mBytesPerFrame;

    CMSampleBufferRef sampleBuffer;
    OSStatus err = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                                   blockBuffer,
                                                                   true,  // dataReady
                                                                   NULL,  // makeDataReadyCallback
                                                                   NULL,  // makeDataReadyRefcon
                                                                   mAudioFormatDescription,
                                                                   nSamples,
                                                                   timeStamp,
                                                                   NULL,  // no packet descriptions for LPCM
                                                                   &sampleBuffer);
    if (err != noErr) {
        NSLog(@"CMAudioSampleBufferCreateWithPacketDescriptions failed: %d", (int)err);
        CFRelease(blockBuffer);
        return;
    }

    // Add the audio sample to the asset writer input
    if ([mAudioWriterInput isReadyForMoreMediaData]) {
        if (![mAudioWriterInput appendSampleBuffer:sampleBuffer]) {
            NSLog(@"appendSampleBuffer failed");
        }
    } else {
        // Either just log an error, or queue the CMSampleBuffer somewhere
        // and append it later, when the AVAssetWriterInput is ready
        NSLog(@"AVAssetWriterInput not ready for more media data");
    }

    CFRelease(sampleBuffer);
    CFRelease(blockBuffer);
}
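The mAudioFormatDescription used above has to be created once, before any sample buffers are built, from the same AudioStreamBasicDescription the queue records with. That setup isn't shown in the original answer; here is a minimal sketch of one way to do it (the variable names mQueue and mAudioFormatDescription are assumptions matching the code above):

```objc
// Hedged sketch: create the CMAudioFormatDescriptionRef once, e.g. right
// after AudioQueueNewInput, from the queue's actual stream description.
AudioStreamBasicDescription asbd = {0};
UInt32 size = sizeof(asbd);
AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
                      &asbd, &size);

CMAudioFormatDescriptionRef formatDesc = NULL;
OSStatus err = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                              &asbd,
                                              0, NULL,  // no channel layout
                                              0, NULL,  // no magic cookie
                                              NULL,     // no extensions
                                              &formatDesc);
if (err == noErr) {
    mAudioFormatDescription = formatDesc; // retained; CFRelease when done
}
```

For LPCM the channel layout and magic cookie can be omitted, which keeps the call simple.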
Note that the sound is not compressed when appendAudioBuffer is called; the audio format is LPCM (which is why I pass NULL for the packet descriptions — there are none for LPCM). The AVAssetWriterInput handles the compression.
I originally tried to pass AAC data to the AVAssetWriter directly, but that led to far too much complication and I was unable to get it to work.
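Since the AVAssetWriterInput does the compression, it just needs AAC output settings when it is created. That configuration isn't shown in the answer; a minimal sketch of what it might look like (the sample rate, channel count, and bitrate values here are assumptions, not values from the original code):

```objc
// Hedged sketch: an AVAssetWriterInput that compresses incoming LPCM
// sample buffers to AAC as they are appended.
AudioChannelLayout layout = {0};
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary *settings = @{
    AVFormatIDKey:         @(kAudioFormatMPEG4AAC),
    AVSampleRateKey:       @44100.0,
    AVNumberOfChannelsKey: @1,
    AVEncoderBitRateKey:   @64000,
    AVChannelLayoutKey:    [NSData dataWithBytes:&layout length:sizeof(layout)]
};
mAudioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                       outputSettings:settings];
mAudioWriterInput.expectsMediaDataInRealTime = YES;
[mAssetWriter addInput:mAudioWriterInput];
```

Setting expectsMediaDataInRealTime matters for live capture, since it tells the writer not to hold data back for interleaving.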
Source: https://stackoverflow.com/questions/20212320/how-to-get-cmsamplebufferref-from-audioqueuebufferref