Make a UIImage from a CMSampleBuffer

温柔的废话 2020-12-13 15:01

This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I'm simply wondering why I can't convert it like

8 Answers
  •  独厮守ぢ
    2020-12-13 15:46

    TO ALL: don't use methods like this one:

        private let context = CIContext()
    
        private func imageFromSampleBuffer2(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
            guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
            let ciImage = CIImage(cvPixelBuffer: imageBuffer)
            guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
            return UIImage(cgImage: cgImage)
        }
    

    they use much more CPU and take longer to convert each frame.

    Use the solution from https://stackoverflow.com/a/40193359/7767664 instead (reproduced below).

    Don't forget to set the following settings on your AVCaptureVideoDataOutput (a fuller capture-session sketch follows the snippet):

        videoOutput = AVCaptureVideoDataOutput()
    
        videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
        //videoOutput.alwaysDiscardsLateVideoFrames = true
    
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "MyQueue"))
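
    These settings typically live inside a full capture-session setup. Below is a minimal sketch of that wiring (not from the linked answer; the CameraController name, the default-device choice, and the queue label are assumptions):

        import AVFoundation

        final class CameraController {
            let session = AVCaptureSession()
            let videoOutput = AVCaptureVideoDataOutput()

            func configure(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
                session.beginConfiguration()

                // Camera input (default video device; pick front/back explicitly if needed)
                if let device = AVCaptureDevice.default(for: .video),
                   let input = try? AVCaptureDeviceInput(device: device),
                   session.canAddInput(input) {
                    session.addInput(input)
                }

                // 32BGRA output so the CGContext-based conversion below can read the pixels directly
                videoOutput.videoSettings = [
                    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
                ]
                videoOutput.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "MyQueue"))
                if session.canAddOutput(videoOutput) {
                    session.addOutput(videoOutput)
                }

                session.commitConfiguration()
                session.startRunning()
            }
        }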
    

    The conversion method (a usage sketch of the delegate callback follows it):

        func imageFromSampleBuffer(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
            // Get the CMSampleBuffer's Core Video image buffer for the media data
            guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

            // Lock the base address of the pixel buffer while reading, and unlock on exit
            CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
            defer { CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly) }

            // Get the base address of the pixel buffer
            let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)

            // Get the number of bytes per row for the pixel buffer
            let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)

            // Get the pixel buffer width and height
            let width = CVPixelBufferGetWidth(imageBuffer)
            let height = CVPixelBufferGetHeight(imageBuffer)

            // Create a device-dependent RGB color space
            let colorSpace = CGColorSpaceCreateDeviceRGB()

            // Bitmap info matching the 32BGRA pixel format set on the output
            var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
            bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue

            // Create a bitmap graphics context with the sample buffer data
            guard let context = CGContext(data: baseAddress, width: width, height: height,
                                          bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                          space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }

            // Create a Quartz image from the pixel data in the bitmap graphics context
            guard let quartzImage = context.makeImage() else { return nil }

            // Create an image object from the Quartz image
            return UIImage(cgImage: quartzImage)
        }
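
    For completeness, here is a minimal usage sketch (not part of the original answer): an AVCaptureVideoDataOutputSampleBufferDelegate that calls imageFromSampleBuffer(_:) for every frame. The class name and the UI hand-off are illustrative assumptions, and the conversion method is assumed to be defined on the same class.

        import AVFoundation
        import UIKit

        final class FrameConverter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

            // Called on the queue passed to setSampleBufferDelegate(_:queue:) for each frame
            func captureOutput(_ output: AVCaptureOutput,
                               didOutput sampleBuffer: CMSampleBuffer,
                               from connection: AVCaptureConnection) {
                // Convert the 32BGRA frame on the background queue
                // (imageFromSampleBuffer(_:) is the method shown above, assumed to live on this class)
                guard let image = imageFromSampleBuffer(sampleBuffer) else { return }

                // Hand the UIImage to the main thread for any UI work
                DispatchQueue.main.async {
                    // e.g. assign it to an image view: previewImageView.image = image
                    _ = image
                }
            }
        }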
    
