AVAssetWriter queue guidance Swift 3

Submitted by 耗尽温柔 on 2019-12-10 21:50:48

Question


Can anyone give me some guidance on using queues in AVFoundation, please?

Later on in my app I want to do some processing on individual frames so I need to use AVCaptureVideoDataOutput.

To get started I thought I'd capture images and then write them (unprocessed) using AVAssetWriter.

I am successfully streaming frames from the camera to the image preview by setting up an AVCaptureSession as follows:

func initializeCameraAndMicrophone() {

// set up the captureSession
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720 // set resolution to 720p

// set up the camera
    let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    do {
        let cameraInput = try AVCaptureDeviceInput(device: camera)
        if captureSession.canAddInput(cameraInput){
            captureSession.addInput(cameraInput)
        }
    } catch {
        print("Error setting device camera input: \(error)")
        return
    }

    videoOutputStream.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sampleBuffer", attributes: []))

    if captureSession.canAddOutput(videoOutputStream)
    {
        captureSession.addOutput(videoOutputStream)
    }

    captureSession.startRunning()
}
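
For reference, the snippet above uses captureSession and videoOutputStream without showing where they are declared; a plausible set of properties (an assumption, not shown in the question) would be:

    var captureSession: AVCaptureSession!              // created in initializeCameraAndMicrophone
    let videoOutputStream = AVCaptureVideoDataOutput() // delivers sample buffers to the delegate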

Each new frame then triggers the captureOutput delegate method:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
{
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let bufferImage = UIImage(ciImage: cameraImage)

    DispatchQueue.main.async
      {
        // send captured frame to the videoPreview
        self.videoPreview.image = bufferImage


        // if recording is active append bufferImage to video frame
        while (recordingNow == true){

            print("OK we're recording!")

            /// Append images to video 
            while (writerInput.isReadyForMoreMediaData) {

                let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)

                pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)


                frameCount += 1              
            }
        }
    }
}

So this streams frames to the image preview perfectly until I press the record button, which calls the startVideoRecording function (which sets up AVAssetWriter). From that point on the delegate never gets called again!

AVAssetWriter is being set up like this:

func startVideoRecording() {


    guard let assetWriter = createAssetWriter(path: filePath!, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!


    let sourceBufferAttributes : [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB) as AnyObject,
        kCVPixelBufferWidthKey as String : videoSize.width as AnyObject,
        kCVPixelBufferHeightKey as String : videoSize.height as AnyObject,
        ]

    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start writing session
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)
    if (pixelBufferAdaptor.pixelBufferPool == nil) {
        print("Error converting images to video: pixelBufferPool nil after starting session")

        assetWriter.finishWriting{
            print("assetWriter stopped!")
        }
        recordingNow = false

        return
    }

    frameCount = 0

    print("Recording started!")
}
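
For completeness, startVideoRecording calls createAssetWriter(path:size:), which the question doesn't show. A minimal Swift 3 sketch of what such a helper might look like (an assumption, not the asker's actual code):

    func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
        do {
            // Write H.264 video of the requested size into an MP4 container
            let writer = try AVAssetWriter(outputURL: URL(fileURLWithPath: path), fileType: AVFileTypeMPEG4)
            let settings: [String : Any] = [
                AVVideoCodecKey  : AVVideoCodecH264,
                AVVideoWidthKey  : size.width,
                AVVideoHeightKey : size.height,
            ]
            let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
            input.expectsMediaDataInRealTime = true // live capture, not offline transcoding
            writer.add(input)
            return writer
        } catch {
            print("Error creating asset writer: \(error)")
            return nil
        }
    }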

I'm new to AVFoundation but I suspect I'm screwing up my queues somewhere.


Answer 1:


You have to use a separate serial queue for capturing video/audio.

  1. Add this queue property to your class:

    let captureSessionQueue: DispatchQueue = DispatchQueue(label: "sampleBuffer", attributes: [])
    
  2. Start the session on captureSessionQueue. According to the Apple docs, "the startRunning() method is a blocking call which can take some time", so perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive):

    captureSessionQueue.async {
        captureSession.startRunning()
    }
    
  3. Set this queue to your capture output pixel buffer delegate:

    videoOutputStream.setSampleBufferDelegate(self, queue: captureSessionQueue)
    
  4. Call startVideoRecording inside captureSessionQueue:

    captureSessionQueue.async {
        startVideoRecording()
    }
    
  5. In the captureOutput delegate method, put all AVFoundation method calls inside captureSessionQueue.async:

    DispatchQueue.main.async
      {
    
        // send captured frame to the videoPreview
        self.videoPreview.image = bufferImage
    
        captureSessionQueue.async {
            // if recording is active, append the pixel buffer as a video frame.
            // Use `if`, not `while` -- nothing inside the loop ever changes
            // recordingNow, so a `while` here would spin forever and the same
            // pixel buffer would be appended repeatedly.
            if recordingNow {
    
                print("OK we're recording!")
    
                /// Append image to video
                if writerInput.isReadyForMoreMediaData {
    
                    let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                    let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
    
                    pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
    
                    frameCount += 1
                }
            }
        }
    }
    
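
The steps above start the writer but never stop it. For symmetry, stopping should also go through captureSessionQueue so it can't race with in-flight appends. A minimal sketch, assuming a hypothetical stopVideoRecording() helper and that assetWriter and writerInput are stored as properties rather than locals of startVideoRecording:

    // Hypothetical stop path (not in the original answer):
    func stopVideoRecording() {
        captureSessionQueue.async {
            self.recordingNow = false          // delegate stops appending frames
            self.writerInput.markAsFinished()  // no more samples expected
            self.assetWriter.finishWriting {
                print("assetWriter finished writing")
            }
        }
    }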


Source: https://stackoverflow.com/questions/44616495/avassetwriter-queue-guidance-swift-3
