Is it possible to render AVCaptureVideoPreviewLayer in a graphics context?

I like @Roma's suggestion of using GPUImage - a great idea. However, if you want a pure Cocoa Touch approach, here's what to do:

Implement the AVCaptureVideoDataOutputSampleBufferDelegate protocol:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage (with corrected orientation) from the sample buffer data
    if (_captureFrame)
    {
        [captureSession stopRunning];

        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        // rotate: is a custom UIImage category; raw frames arrive rotated,
        // so fix the orientation before handing the image off
        image = [image rotate:UIImageOrientationRight];

        _frameCaptured = YES;

        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
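
For the delegate method to fire at all, the session needs a video data output wired to it. Here's a minimal sketch of that setup, assuming captureSession is the same ivar the delegate stops, and configuring the output for BGRA (the pixel format the conversion code below expects); the queue label is illustrative:

captureSession = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input) [captureSession addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// imageFromSampleBuffer: below assumes 32BGRA pixel data
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[output setSampleBufferDelegate:self queue:dispatch_queue_create("video.capture", DISPATCH_QUEUE_SERIAL)];
[captureSession addOutput:output];

[captureSession startRunning];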

Convert the sample buffer to a UIImage as follows:

+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    // (assumes the video output delivers kCVPixelFormatType_32BGRA frames,
    // as configured in the session setup above)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Blend the UIImage with the overlay

  • Now that you have the UIImage, add it to a new UIView.
  • Add the overlay on top as a sub-view, as in the sketch below.
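
A minimal sketch of that composition (overlayView stands in for whatever overlay you are drawing, and the container is sized to the captured frame):

// Build a throwaway container: the captured frame at the bottom,
// the overlay stacked on top of it
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];

UIImageView *frameView = [[UIImageView alloc] initWithImage:image];
[container addSubview:frameView];

overlayView.frame = container.bounds;
[container addSubview:overlayView];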

Capture the new UIView

+ (UIImage*)imageWithView:(UIView*)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage* img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
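
Putting it together (assuming imageWithView: lives on the same ImageTools class as the sample-buffer conversion above):

UIImage *composite = [ImageTools imageWithView:container];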

I can advise you to try GPUImage.

https://github.com/BradLarson/GPUImage

It uses OpenGL, so it's rather fast. It can process pictures from the camera and apply filters to them (there are a lot of them), including edge detection, motion detection, and far more.
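
For example, a minimal camera-plus-filter pipeline with GPUImage might look like this (a sketch along the lines of the project's README; the Sobel edge-detection filter stands in for whichever filter you need):

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Chain the camera into a filter, then into an on-screen view
GPUImageSobelEdgeDetectionFilter *filter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

[videoCamera addTarget:filter];
[filter addTarget:filteredView];

[videoCamera startCameraCapture];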

It's like OpenCV, but in my experience GPUImage is easier to integrate into a project, and the language is Objective-C.

A problem could appear if you decide to use Box2D for physics: it uses OpenGL too, and you will need to spend some time before these two frameworks stop fighting.
