Question
Can anyone give me some guidance on using queues in AVFoundation, please?
Later on in my app I want to do some processing on individual frames so I need to use AVCaptureVideoDataOutput.
To get started I thought I'd capture images and then write them (unprocessed) using AVAssetWriter.
I am successfully streaming frames from the camera to the image preview by setting up an AVCaptureSession as follows:
func initializeCameraAndMicrophone() {
    // Set up the capture session
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720 // set resolution to 720p

    // Set up the camera
    let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    do {
        let cameraInput = try AVCaptureDeviceInput(device: camera)
        if captureSession.canAddInput(cameraInput) {
            captureSession.addInput(cameraInput)
        }
    } catch {
        print("Error setting device camera input: \(error)")
        return
    }

    // Deliver sample buffers to this class on a background queue
    videoOutputStream.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sampleBuffer", attributes: []))
    if captureSession.canAddOutput(videoOutputStream) {
        captureSession.addOutput(videoOutputStream)
    }

    captureSession.startRunning()
}
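Note: videoOutputStream isn't declared in the snippet above; presumably it's an AVCaptureVideoDataOutput property of the class, along the lines of this sketch (the explicit pixel format setting is an assumption):

// Assumed property declaration for the capture output used above.
let videoOutputStream = AVCaptureVideoDataOutput()

// Optionally pin the pixel format (e.g. inside initializeCameraAndMicrophone);
// 32BGRA is convenient for building CIImages from the sample buffers.
videoOutputStream.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32BGRA)]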
Each new frame then triggers the captureOutput delegate method:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let bufferImage = UIImage(ciImage: cameraImage)

    DispatchQueue.main.async {
        // send captured frame to the videoPreview
        self.videoPreview.image = bufferImage

        // if recording is active append bufferImage to video frame
        while (recordingNow == true) {
            print("OK we're recording!")
            /// Append images to video
            while (writerInput.isReadyForMoreMediaData) {
                let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
                pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
                frameCount += 1
            }
        }
    }
}
So this streams frames to the image preview perfectly, until I press the record button, which calls the startVideoRecording function (which sets up the AVAssetWriter). From that point on the delegate never gets called again!
AVAssetWriter is being set up like this:
func startVideoRecording() {
    guard let assetWriter = createAssetWriter(path: filePath!, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // AVAssetWriter exists, so create an AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter { $0.mediaType == AVMediaTypeVideo }.first!
    let sourceBufferAttributes: [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB) as AnyObject,
        kCVPixelBufferWidthKey as String : videoSize.width as AnyObject,
        kCVPixelBufferHeightKey as String : videoSize.height as AnyObject,
    ]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start the writing session
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)

    if (pixelBufferAdaptor.pixelBufferPool == nil) {
        print("Error converting images to video: pixelBufferPool nil after starting session")
        assetWriter.finishWriting {
            print("assetWriter stopped!")
        }
        recordingNow = false
        return
    }

    frameCount = 0
    print("Recording started!")
}
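The createAssetWriter(path:size:) helper isn't shown in the question; a minimal Swift 3 sketch of what it might look like, assuming an H.264 MP4 target (the file type and codec settings here are assumptions):

func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
    // Convert the path string into a file URL for the writer.
    let pathURL = URL(fileURLWithPath: path)
    do {
        let newWriter = try AVAssetWriter(outputURL: pathURL, fileType: AVFileTypeMPEG4)

        // One H.264 video input at the requested dimensions.
        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264 as AnyObject,
            AVVideoWidthKey  : size.width as AnyObject,
            AVVideoHeightKey : size.height as AnyObject,
        ]
        let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterVideoInput.expectsMediaDataInRealTime = true
        newWriter.add(assetWriterVideoInput)

        return newWriter
    } catch {
        print("Error creating asset writer: \(error)")
        return nil
    }
}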
I'm new to AVFoundation but I suspect I'm screwing up my queues somewhere.
Answer 1:
You have to use a separate serial queue for capturing video/audio.
Add this queue property to your class:
let captureSessionQueue: DispatchQueue = DispatchQueue(label: "sampleBuffer", attributes: [])
Start the session on captureSessionQueue. According to the Apple docs: "The startRunning() method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive)."
captureSessionQueue.async { captureSession.startRunning() }
Set this queue as the sample buffer delegate queue of your capture output:
videoOutputStream.setSampleBufferDelegate(self, queue: captureSessionQueue)
Call startVideoRecording inside captureSessionQueue:
captureSessionQueue.async { startVideoRecording() }
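For instance, a record button action might look like this (the recordButtonTapped name is just for illustration):

@IBAction func recordButtonTapped(_ sender: UIButton) {
    // Dispatch onto the capture queue so writer setup is serialized
    // with the sample buffer callbacks arriving on the same queue.
    captureSessionQueue.async {
        self.startVideoRecording()
    }
}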
In the captureOutput delegate method, put all AVFoundation calls inside captureSessionQueue.async:
DispatchQueue.main.async {
    // send captured frame to the videoPreview
    self.videoPreview.image = bufferImage

    captureSessionQueue.async {
        // if recording is active append bufferImage to video frame
        while (recordingNow == true) {
            print("OK we're recording!")
            /// Append images to video
            while (writerInput.isReadyForMoreMediaData) {
                let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
                pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
                frameCount += 1
            }
        }
    }
}
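With this arrangement, every interaction with the asset writer (setup, appending, finishing) happens on the one serial captureSessionQueue, so the writer is never touched from two threads at once, and the main queue only performs the preview update and stays responsive.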
Source: https://stackoverflow.com/questions/44616495/avassetwriter-queue-guidance-swift-3