I'm using some tricks to try to read the raw output of an AVAssetWriter while it is being written to disk. When I reassemble the individual files by concatenating them, the resulting file will not play.
I ended up abandoning the "read while it's written" approach in favor of a manual chunking approach, where I call finishWriting every 5 seconds from a repeating timer and do the actual work on a background thread. Only a negligible number of frames are dropped during the switch, using a method originally described here (a sketch of the timer setup follows the snippet below):
- (void) segmentRecording:(NSTimer *)timer {
    // Swap in the pre-built writer and inputs so capture continues uninterrupted.
    AVAssetWriter *tempAssetWriter = self.assetWriter;
    AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
    AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
    self.assetWriter = self.queuedAssetWriter;
    self.audioEncoder = self.queuedAudioEncoder;
    self.videoEncoder = self.queuedVideoEncoder;
    //NSLog(@"Switching encoders");

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        // Finish the previous segment off the capture thread.
        [tempAudioEncoder markAsFinished];
        [tempVideoEncoder markAsFinished];
        if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
            if (![tempAssetWriter finishWriting]) {
                [self showError:[tempAssetWriter error]];
            }
        }
        // Queue up a fresh writer and inputs for the next segment swap.
        if (self.readyToRecordAudio && self.readyToRecordVideo) {
            NSError *error = nil;
            self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[self newMovieURL] fileType:(NSString *)kUTTypeMPEG4 error:&error];
            if (error) {
                [self showError:error];
            }
            self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
            self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
            //NSLog(@"Encoder switch finished");
        }
    });
}
Full source code: https://github.com/chrisballinger/FFmpeg-iOS-Encoder/blob/master/AVSegmentingAppleEncoder.m
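For completeness, here is a minimal sketch of how the repeating timer that drives segmentRecording: might be set up. The 5-second interval and the segmentRecording: selector come from the snippet above; the startSegmentTimer/stopSegmentTimer method names and the segmentationTimer property are placeholders for illustration, not taken from the original source:

- (void) startSegmentTimer {
    // Hypothetical setup: fire segmentRecording: every 5 seconds on the main run loop.
    // segmentRecording: itself hands the finishWriting work to a background queue.
    self.segmentationTimer = [NSTimer scheduledTimerWithTimeInterval:5.0
                                                               target:self
                                                             selector:@selector(segmentRecording:)
                                                             userInfo:nil
                                                              repeats:YES];
}

- (void) stopSegmentTimer {
    // Stop segmenting when recording ends.
    [self.segmentationTimer invalidate];
    self.segmentationTimer = nil;
}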