This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView containing an AVCaptureVideoPreviewLayer as a sublayer into an image to be saved?
I like @Roma's suggestion of using GPUImage - great idea. However, if you want a pure Cocoa Touch approach, here's what to do:
Implement AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage with the correct orientation from the sample buffer data
    if (_captureFrame)
    {
        [captureSession stopRunning];
        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        image = [image rotate:UIImageOrientationRight];
        _frameCaptured = YES;
        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
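For context, here's a rough sketch of how the capture session and video data output might be wired up so this delegate actually fires. The ivar names mirror the snippet above and are just illustrative; note the BGRA pixel format, which the imageFromSampleBuffer: method below relies on:

#import <AVFoundation/AVFoundation.h>

- (void)setupCaptureSession
{
    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetHigh;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    if ([captureSession canAddInput:input])
        [captureSession addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // BGRA matches the CGBitmapContext settings used in imageFromSampleBuffer:
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    // Delegate callbacks arrive on this background queue
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("video.capture.queue", DISPATCH_QUEUE_SERIAL)];
    if ([captureSession canAddOutput:output])
        [captureSession addOutput:output];

    [captureSession startRunning];
}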
Convert the sample buffer to a UIImage as follows:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
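Note that the rotate: call in the delegate method isn't a stock UIKit method - it's presumably a small UIImage category. A minimal version (assuming you only need to re-tag the orientation rather than re-render the pixels) could look like this:

@interface UIImage (Rotate)
- (UIImage *)rotate:(UIImageOrientation)orientation;
@end

@implementation UIImage (Rotate)
- (UIImage *)rotate:(UIImageOrientation)orientation
{
    // Re-wrap the same CGImage with a different orientation flag;
    // UIKit applies the rotation when the image is drawn
    return [UIImage imageWithCGImage:self.CGImage scale:self.scale orientation:orientation];
}
@end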
Blend the UIImage with the overlay
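For example, you could draw the captured frame and the overlay image into one graphics context (a sketch - the blendImage:withOverlay: name is just illustrative):

+ (UIImage *)blendImage:(UIImage *)image withOverlay:(UIImage *)overlay
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    // Draw the camera frame first, then the overlay on top of it
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    [overlay drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blended;
}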
Capture the new UIView
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
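Putting the pieces together, the delegate callback from the first step might end up looking something like this (hypothetical wiring - overlayView is a placeholder, and I'm assuming the class methods above live in ImageTools):

- (void)cameraPictureTaken:(UIImage *)cameraImage
{
    // Called from the capture queue; dispatch to the main queue first if you touch UI here
    UIImage *overlayImage = [ImageTools imageWithView:self.overlayView];
    UIImage *finalImage = [ImageTools blendImage:cameraImage withOverlay:overlayImage];
    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, NULL);
}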
I'd advise you to try GPUImage:
https://github.com/BradLarson/GPUImage
It uses OpenGL, so it's rather fast. It can process pictures from the camera and apply filters to them (there are lots of them), including edge detection, motion detection, and far more.
It's like OpenCV, but in my experience GPUImage is easier to integrate into a project, and the language is Objective-C.
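For instance, running a live edge-detection filter on the camera feed takes only a few lines with GPUImage (a sketch; the preset and filter choice are up to you):

#import "GPUImage.h"

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Chain the camera into a filter and render the result into a GPUImageView
GPUImageSobelEdgeDetectionFilter *filter = [[GPUImageSobelEdgeDetectionFilter alloc] init];
GPUImageView *filteredView = (GPUImageView *)self.view;

[videoCamera addTarget:filter];
[filter addTarget:filteredView];
[videoCamera startCameraCapture];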
A problem could appear if you decide to use Box2D for physics - it uses OpenGL too, and you will need to spend some time getting the two frameworks to stop fighting.