core-image

CIAreaHistogram inputScale factor

安稳与你 submitted on 2019-12-03 15:30:45
I'm building an application that uses the CIAreaHistogram Core Image filter. I use an inputCount value (number of buckets) of 10 for testing, and an inputScale value of 1. I get the CIImage for the histogram itself, which I then run through a custom kernel (see end of post) to set alpha values to 1 (since otherwise the alpha value from the histogram calculations is premultiplied) and then convert it to an NSBitmapImageRep. I then scan through the image rep's buffer and print the RGB values (skipping the alpha values). However, when I do this, the sum of the R, G, and B values across the 10 do
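
A minimal Swift sketch of one way to read the histogram back without going through an 8-bit bitmap path (which is where the premultiplied-alpha issue above comes from); the function and variable names are illustrative, not the poster's code:

import CoreImage

func histogramBuckets(for inputImage: CIImage, count: Int = 10) -> [Float] {
    // Build CIAreaHistogram with `count` buckets and a scale of 1.
    let filter = CIFilter(name: "CIAreaHistogram", parameters: [
        kCIInputImageKey: inputImage,
        kCIInputExtentKey: CIVector(cgRect: inputImage.extent),
        "inputCount": count,
        "inputScale": 1.0
    ])
    guard let histogramImage = filter?.outputImage else { return [] }

    // The output is a `count` x 1 image; render it as 32-bit float RGBA
    // so the raw bucket values are not clamped or premultiplied by an 8-bit path.
    var bitmap = [Float](repeating: 0, count: count * 4)
    let context = CIContext(options: [.workingColorSpace: NSNull()])
    context.render(histogramImage,
                   toBitmap: &bitmap,
                   rowBytes: count * 4 * MemoryLayout<Float>.size,
                   bounds: CGRect(x: 0, y: 0, width: count, height: 1),
                   format: .RGBAf,
                   colorSpace: nil)
    return bitmap
}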

How can I use CIFilter in iOS?

南楼画角 submitted on 2019-12-03 14:12:30
Apple says that CIFilter is available in iOS. However, on my Mac I couldn't find a CoreImage framework to link against. filter An optional Core Image filter object that provides the transition. @property(retain) CIFilter *filter i.e. when I try to do something like this, it crashes because CIFilter is unknown: [transition setFilter:[CIFilter filterWithName:@"CIShapedWaterRipple"]]; I linked against: #import <UIKit/UIKit.h> #import <QuartzCore/QuartzCore.h> #import <CoreGraphics/CoreGraphics.h> The following is an example of how I am generating a filtered UIImage on the iPhone using a CIFilter
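
For reference, a minimal Swift sketch of applying a built-in Core Image filter to a UIImage on iOS (CISepiaTone is used purely as an example; not every filter name that exists on the Mac is available on iOS):

import UIKit
import CoreImage   // on iOS 5 and later, CoreImage is its own framework to link and import

func sepiaImage(from source: UIImage) -> UIImage? {
    guard let input = CIImage(image: source),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage else { return nil }
    // Render the result through a CIContext and wrap it back into a UIImage.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}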

Fastest YUV420P to RGBA conversion on iOS using the CPU

本小妞迷上赌 submitted on 2019-12-03 13:00:20
Question: Can anyone recommend a really fast API, ideally NEON-optimized, for doing YUV to RGB conversion at runtime on the iPhone using the CPU? The Accelerate framework's vImage doesn't provide anything suitable, sadly, and using vDSP, converting to floats and back seems suboptimal and almost as much work as writing NEON myself. I know how to use the GPU for this via a shader, and in fact already do so for displaying my main video plane. Unfortunately, I also need to create and save RGBA textures of
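
As a correctness baseline (not a NEON implementation), here is a minimal scalar Swift sketch of the BT.601 video-range conversion; it assumes tightly packed planar buffers, which may not match your actual strides:

func yuv420pToRGBA(y: UnsafePointer<UInt8>, u: UnsafePointer<UInt8>, v: UnsafePointer<UInt8>,
                   width: Int, height: Int, rgba: UnsafeMutablePointer<UInt8>) {
    for row in 0..<height {
        for col in 0..<width {
            let yVal = Int(y[row * width + col]) - 16
            let uVal = Int(u[(row / 2) * (width / 2) + col / 2]) - 128
            let vVal = Int(v[(row / 2) * (width / 2) + col / 2]) - 128

            // BT.601 integer approximation (coefficients scaled by 256).
            let r = (298 * yVal + 409 * vVal + 128) >> 8
            let g = (298 * yVal - 100 * uVal - 208 * vVal + 128) >> 8
            let b = (298 * yVal + 516 * uVal + 128) >> 8

            let i = (row * width + col) * 4
            rgba[i]     = UInt8(clamping: r)
            rgba[i + 1] = UInt8(clamping: g)
            rgba[i + 2] = UInt8(clamping: b)
            rgba[i + 3] = 255
        }
    }
}

Later SDKs also added planar YpCbCr-to-ARGB conversions to vImage, which is worth checking before hand-writing NEON.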

iOS6: How to use the conversion feature of YUV to RGB from CVPixelBufferRef to CIImage

末鹿安然 submitted on 2019-12-03 12:45:19
Starting in iOS 6, Apple has provided a way to create a CIImage from native YUV pixel buffers through the call initWithCVPixelBuffer:options:. In the Core Image Programming Guide, they mention this feature: Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform. options = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType
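
A minimal Swift sketch of that path (delegate wiring omitted): request a biplanar YUV pixel format from the capture output and let Core Image perform the YUV-to-RGB conversion when the CIImage is rendered:

import AVFoundation
import CoreImage

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
]

func ciImage(from sampleBuffer: CMSampleBuffer) -> CIImage? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    // Core Image accepts the YUV pixel buffer directly and applies the color transform.
    return CIImage(cvPixelBuffer: pixelBuffer)
}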

Any suggestions on how to handle this crash in CGImageDestinationFinalize?

依然范特西╮ submitted on 2019-12-03 09:15:28
My application reads and resizes images that are loaded from the internet, and unfortunately I can't control the creation of these images. Recently I had a crash that I am not sure how best to handle. In this case the image was a corrupt GIF file. It wasn't badly corrupted, but it was reporting a resolution (height x width) that wasn't accurate. The image was supposed to be a 400x600 image but was reporting something like 1111x999. The code snippet that crashed is: - (void) saveRawImageRefToDisk:(CGImageRef)image withUUID:(NSString *)uuid andType:(NSString *)type { if (!uuid ||
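
One defensive pattern (a sketch, not the poster's code, and it assumes iOS 14+ for UTType) is to sanity-check the CGImage's reported dimensions and the return value of CGImageDestinationFinalize rather than assuming the write succeeds; this mitigates, but does not fully prevent, failures on corrupt input:

import ImageIO
import UniformTypeIdentifiers

func writeImage(_ image: CGImage, to url: URL) -> Bool {
    // Reject obviously bogus dimensions (e.g. from a corrupt GIF header).
    // The 20,000-pixel bound is an arbitrary example threshold.
    let width = image.width, height = image.height
    guard width > 0, height > 0, width <= 20_000, height <= 20_000 else { return false }

    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.png.identifier as CFString, 1, nil) else { return false }
    CGImageDestinationAddImage(destination, image, nil)
    // Finalize reports failure via its return value; check it instead of ignoring it.
    return CGImageDestinationFinalize(destination)
}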

iOS face detector orientation and setting of CIImage orientation

六眼飞鱼酱① submitted on 2019-12-03 08:54:16
EDIT: found this code that helped with front camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/ Hope others have had a similar issue and can help me out. Haven't found a solution yet. (It may seem a bit long, but it's just a bunch of helper code.) I'm using the iOS face detector on images acquired from the camera (front and back) as well as images from the gallery (I'm using UIImagePicker - for both image capture by camera and image selection from the gallery - not using AVFoundation for taking pictures like in the SquareCam demo). I am getting really messed up coordinates for the
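
The usual fix is to pass the image's orientation to the detector via the CIDetectorImageOrientation option (an EXIF value from 1 to 8); a Swift sketch, with a UIImage-to-EXIF mapping that should be verified against your own capture path:

import UIKit
import CoreImage

func detectFaces(in uiImage: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: uiImage) else { return [] }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    // Rough UIImage.Orientation -> EXIF mapping.
    let exifOrientation: Int
    switch uiImage.imageOrientation {
    case .up:            exifOrientation = 1
    case .upMirrored:    exifOrientation = 2
    case .down:          exifOrientation = 3
    case .downMirrored:  exifOrientation = 4
    case .leftMirrored:  exifOrientation = 5
    case .right:         exifOrientation = 6
    case .rightMirrored: exifOrientation = 7
    case .left:          exifOrientation = 8
    @unknown default:    exifOrientation = 1
    }

    let features = detector?.features(in: ciImage,
                                      options: [CIDetectorImageOrientation: exifOrientation]) ?? []
    return features.compactMap { $0 as? CIFaceFeature }
}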

Applying filter to real time camera preview - Swift

隐身守侯 submitted on 2019-12-03 08:46:19
I'm trying to follow the answer given here: https://stackoverflow.com/a/32381052/8422218 to create an app which uses the back-facing camera and adds a filter, then displays it on the screen in real time. Here is my code: // // ViewController.swift // CameraFilter // import UIKit import AVFoundation class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { var captureSession = AVCaptureSession() var backCamera: AVCaptureDevice? var frontCamera: AVCaptureDevice? var currentCamera: AVCaptureDevice? var photoOutput: AVCapturePhotoOutput? var cameraPreviewLayer:
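
The piece a setup like this usually needs is the sample-buffer delegate callback. A sketch of what could go inside the ViewController above, paraphrasing the linked answer (filteredImageView and the filter name are assumptions, not the poster's code):

let context = CIContext()

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

    // Apply any CIFilter to the frame; CIComicEffect is just an example.
    guard let filter = CIFilter(name: "CIComicEffect") else { return }
    filter.setValue(cameraImage, forKey: kCIInputImageKey)

    guard let filtered = filter.outputImage,
          let cgImage = context.createCGImage(filtered, from: cameraImage.extent) else { return }

    // UI updates must happen on the main thread.
    DispatchQueue.main.async {
        self.filteredImageView.image = UIImage(cgImage: cgImage)
    }
}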

CIGaussianBlur image size

一个人想着一个人 submitted on 2019-12-03 08:24:24
Question: Hey, I want to blur my view, and I use this code: //Get a UIImage from the UIView NSLog(@"blur capture"); UIGraphicsBeginImageContext(BlurContrainerView.frame.size); [self.view.layer renderInContext:UIGraphicsGetCurrentContext()]; UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); //Blur the UIImage CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage]; CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
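
The usual cause of the output changing size is that CIGaussianBlur grows the image extent by the blur radius. A minimal Swift sketch of the common fix: clamp the edges before blurring, then crop the result back to the original extent:

import CoreImage

func blurred(_ input: CIImage, radius: Double) -> CIImage? {
    guard let blur = CIFilter(name: "CIGaussianBlur") else { return nil }
    // clampedToExtent() extends the edge pixels so the blur has data past the borders.
    blur.setValue(input.clampedToExtent(), forKey: kCIInputImageKey)
    blur.setValue(radius, forKey: kCIInputRadiusKey)
    // Crop back to the original size; otherwise the output is larger than the input.
    return blur.outputImage?.cropped(to: input.extent)
}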

How to create simple custom filter for iOS using Core Image Framework?

柔情痞子 submitted on 2019-12-03 08:13:53
I want to use a custom filter in my app. Now I know that I need to use the Core Image framework, but I'm not sure that is the right way. The Core Image framework exists for Mac OS and, as of iOS 5.0, for iOS - but I'm not sure it can be used for custom CIFilter effects. Can you help me with this issue? Thanks all! Adam Wright: OUTDATED - You can't create your own custom kernels/filters in iOS yet. See http://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/CoreImaging/ci_intro/ci_intro.html , specifically: Although this document is included in the reference library, it has not been updated in
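
The answer above is marked OUTDATED for a reason: later iOS releases (iOS 8 and up) do allow custom kernels. A minimal Swift sketch of a CIColorKernel-based filter, with the kernel written in the Core Image Kernel Language (on recent systems, Metal-based kernels are the preferred route):

import CoreImage

class SimpleInvertFilter: CIFilter {
    var inputImage: CIImage?

    // A trivial per-pixel kernel that inverts RGB and preserves alpha.
    static let kernel = CIColorKernel(source:
        "kernel vec4 invertColor(__sample s) { return vec4(1.0 - s.rgb, s.a); }"
    )

    override var outputImage: CIImage? {
        guard let input = inputImage, let kernel = SimpleInvertFilter.kernel else { return nil }
        return kernel.apply(extent: input.extent, arguments: [input])
    }
}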

Xcode: compositing with alpha using core image

左心房为你撑大大i submitted on 2019-12-03 03:35:30
I'd like to create a Core Image filter chain, and to be able to control the "intensity" of each filter in the chain by compositing its individual effect with alpha or opacity settings, but I am not seeing a way to composite with alpha or opacity in the docs. I could jump out of the Core Image filter chain and composite with a Core Graphics context, I guess. The CIColorMatrix filter can be used to alter the alpha component of a CIImage, which you can then composite onto a background image: CIImage *overlayImage = … // from file, CGImage etc CIImage *backgroundImage = … // likewise CGFloat alpha = 0
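
A Swift sketch of where that truncated snippet is headed (the filter names follow the answer; if your overlay uses premultiplied alpha you may also need to scale the R, G and B vectors by the same factor):

import CoreImage

func composite(_ overlay: CIImage, over background: CIImage, alpha: CGFloat) -> CIImage? {
    // Scale the overlay's alpha channel with CIColorMatrix.
    guard let matrix = CIFilter(name: "CIColorMatrix") else { return nil }
    matrix.setValue(overlay, forKey: kCIInputImageKey)
    matrix.setValue(CIVector(x: 0, y: 0, z: 0, w: alpha), forKey: "inputAVector")

    // Composite the faded overlay onto the background.
    guard let faded = matrix.outputImage,
          let compose = CIFilter(name: "CISourceOverCompositing") else { return nil }
    compose.setValue(faded, forKey: kCIInputImageKey)
    compose.setValue(background, forKey: kCIInputBackgroundImageKey)
    return compose.outputImage
}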