Record square video using AVFoundation and add watermark

死守一世寂寞 2020-12-17 06:33

Illustration of what I'm trying to do

I'm trying to do the following:

  • Play music
  • Record a square video (I have a container in the view whi…
1 Answer
  • 2020-12-17 07:13

    A few things:

    As far as audio goes, you're adding a video (camera) input but no audio input, so add one to get sound:

        let audioInputDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
    
        do {
            let input = try AVCaptureDeviceInput(device: audioInputDevice)
    
            if sourceAVFoundation.captureSession.canAddInput(input) {
                sourceAVFoundation.captureSession.addInput(input)
            } else {
                NSLog("ERROR: Can't add audio input")
            }
        } catch let error {
            NSLog("ERROR: Getting input device: \(error)")
        }
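
    One caveat worth mentioning (not in the original answer): on iOS you also need microphone permission, or the audio input silently records nothing. A minimal sketch, assuming you've added the `NSMicrophoneUsageDescription` key to your Info.plist:

        // Request microphone access before adding the audio input.
        // (Without NSMicrophoneUsageDescription in Info.plist the app may crash.)
        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeAudio) { granted in
            if !granted {
                NSLog("WARNING: Microphone permission denied; video will have no sound")
            }
        }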
    

    To make the video square, you're going to have to look at using AVAssetWriter instead of AVCaptureFileOutput. This is more complex, but you get more "power". You've already created an AVCaptureSession, which is great; to hook up the AssetWriter, you'll need to do something like this:

        let fileManager = NSFileManager.defaultManager()
        let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
        guard let documentDirectory: NSURL = urls.first else {
            print("Video Controller: getAssetWriter: documentDir Error")
            return nil
        }
    
        let local_video_name = NSUUID().UUIDString + ".mp4"
        self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)
    
        guard let url = self.videoOutputURL else {
            return nil
        }
    
    
        self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)
    
        guard let writer = self.assetWriter else {
            return nil
        }
    
        //TODO: Set your desired video size here! 
        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : captureSize.width,
            AVVideoHeightKey : captureSize.height,
            AVVideoCompressionPropertiesKey : [
                AVVideoAverageBitRateKey : 200000,
                AVVideoProfileLevelKey : AVVideoProfileLevelH264Baseline41,
                AVVideoMaxKeyFrameIntervalKey : 90,
            ],
        ]
    
        assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterInputCamera?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputCamera!)
    
        let audioSettings : [String : AnyObject] = [
            AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
            AVNumberOfChannelsKey : 2,
            AVSampleRateKey : NSNumber(double: 44100.0)
        ]
    
        assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        assetWriterInputAudio?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputAudio!)
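
    A note on actually getting a *square* file: the camera delivers 16:9 (or 4:3) buffers, so just setting equal width and height in the settings above will squash the image. One simpler option (an assumption on my part, not from the original answer) is to let the writer input centre-crop for you via `AVVideoScalingModeKey`, instead of cropping each pixel buffer yourself:

        // Aspect-fill (centre-crop) the camera buffers into square dimensions,
        // rather than distorting them. 640 is an example size.
        let squareVideoSettings: [String : AnyObject] = [
            AVVideoCodecKey : AVVideoCodecH264,
            AVVideoWidthKey : 640,
            AVVideoHeightKey : 640,
            AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
        ]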
    

    Once you have the AssetWriter set up, hook up some outputs for the video and audio:

        // Keep the queues as properties; the capture delegate below uses them too.
        self.audioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: self.audioQueue)
        captureSession.addOutput(audioOutput)
    
        // Always add video last...
        self.videoQueue = dispatch_queue_create("video buffer delegate", DISPATCH_QUEUE_SERIAL)
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: self.videoQueue)
        captureSession.addOutput(videoOutput)
        if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
            if connection.supportsVideoOrientation {
                // Force recording to portrait
                connection.videoOrientation = AVCaptureVideoOrientation.Portrait
            }
    
            self.outputConnection = connection
        }
    
    
        captureSession.startRunning()
    

    Finally, you need to capture the buffers and process them. Make sure your class is a delegate of both AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate:

        //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
        func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    
            if !self.isRecordingStarted {
                return
            }
    
            // Audio connection: append to the audio input (or drop the buffer),
            // but never let an audio buffer fall through to the video input.
            if connection.audioChannels.count > 0 {
                if let audio = self.assetWriterInputAudio where audio.readyForMoreMediaData {
                    dispatch_async(audioQueue!) {
                        audio.appendSampleBuffer(sampleBuffer)
                    }
                }
                return
            }
    
            if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
                dispatch_async(videoQueue!) {
                    camera.appendSampleBuffer(sampleBuffer)
                }
            }
        }
    

    There are a few missing bits and pieces, but hopefully this is enough for you to figure it out along with the documentation.
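
    One of those missing pieces is starting and finishing the writer itself; nothing will land in the file without it. A rough sketch (the method names here are my own; `writer`, the two inputs, `videoOutputURL` and `isRecordingStarted` refer to the code above):

        func startRecording() {
            if writer.startWriting() {
                // Ideally pass the presentation timestamp of the first buffer
                // you append (CMSampleBufferGetPresentationTimeStamp), not zero,
                // or the file may start with a blank lead-in.
                writer.startSessionAtSourceTime(kCMTimeZero)
                self.isRecordingStarted = true
            }
        }
    
        func stopRecording() {
            self.isRecordingStarted = false
            assetWriterInputCamera?.markAsFinished()
            assetWriterInputAudio?.markAsFinished()
            writer.finishWritingWithCompletionHandler {
                NSLog("Finished writing to \(self.videoOutputURL)")
            }
        }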

    Finally, if you want to add the watermark, there are many ways this can be done in real time, but one possible way is to modify the sampleBuffer and draw the watermark into the image before appending it. You'll find other questions on StackOverflow dealing with that.
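
    For instance, something along these lines (a rough sketch only: `applyWatermark` is a made-up helper name, and it assumes the camera is configured to deliver BGRA pixel buffers):

        // Draw a watermark image into a video pixel buffer before appending it.
        func applyWatermark(sampleBuffer: CMSampleBuffer, watermark: UIImage) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    
            CVPixelBufferLockBaseAddress(pixelBuffer, 0)
            let context = CGBitmapContextCreate(
                CVPixelBufferGetBaseAddress(pixelBuffer),
                CVPixelBufferGetWidth(pixelBuffer),
                CVPixelBufferGetHeight(pixelBuffer),
                8,
                CVPixelBufferGetBytesPerRow(pixelBuffer),
                CGColorSpaceCreateDeviceRGB(),
                CGImageAlphaInfo.PremultipliedFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
    
            // Bottom-left corner; adjust the rect to taste.
            CGContextDrawImage(context, CGRect(x: 20, y: 20, width: 100, height: 40), watermark.CGImage)
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
        }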
