core-image

Formatting CIColorCube data

删除回忆录丶 · Submitted on 2019-11-28 05:12:31
Question: Recently, I've been trying to set up a CIColorCube filter on a CIImage to create a custom effect. Here's what I have now:

uint8_t color_cube_data[8*4] = {
    0, 0, 0, 1,    255, 0, 0, 1,    0, 255, 0, 1,    255, 255, 0, 1,
    0, 0, 255, 1,  255, 0, 255, 1,  0, 255, 255, 1,  255, 255, 255, 1
};
NSData *cube_data = [NSData dataWithBytes:color_cube_data length:8*4*sizeof(uint8_t)];
CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
[filter setValue:beginImage forKey:kCIInputImageKey];
[filter setValue:@2

Image auto-rotates after using CIFilter

不羁岁月 · Submitted on 2019-11-28 05:07:47
Question: I am writing an app that lets users take a picture and then edit it. I am implementing tools with UISliders for brightness, contrast, and saturation, using the Core Image filter classes. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture and then use any of the described slider tools, the image rotates 90 degrees counterclockwise. Here's the code in question:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any

Correct crop of CIGaussianBlur

ぃ、小莉子 · Submitted on 2019-11-28 03:50:40
As I noticed, when CIGaussianBlur is applied to an image, the image's corners get blurred, so the result looks smaller than the original. So I figured out that I need to crop it correctly to avoid transparent edges on the image. But how do I calculate how much I need to crop depending on the blur amount?

Example: Original image. Image with an inputRadius of 50 for CIGaussianBlur (the blue color is the background of everything):

Answer (Eric McGary): Take the following code as an example...

CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter

Is there a way to create a CGPath matching outline of a SKSpriteNode?

此生再无相见时 · Submitted on 2019-11-28 01:41:05
Question: My goal is to create a CGPath that matches the outline of an SKSpriteNode. This would be useful for creating glows/outlines of SKSpriteNodes, as well as a path for physics. One thought I have had — though I have not really worked much with CIImage, so I don't know whether there is a way to access or modify images at the pixel level — is that maybe I could port something like this to Objective-C: http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/ Also

Getting a CGImage from CIImage

爷，独闯天下 · Submitted on 2019-11-27 23:09:41
Question: I have a UIImage which is loaded from a CIImage with:

tempImage = [UIImage imageWithCIImage:ciImage];

The problem is I need to crop tempImage to a specific CGRect, and the only way I know how to do this is by using CGImage. The problem is that the iOS 6.0 documentation says:

CGImage: If the UIImage object was initialized using a CIImage object, the value of the property is NULL.

A. How do I convert from CIImage to CGImage? I'm using this code, but I have a memory leak (and can't

Determine the corners of a sheet of paper with iOS 5 AV Foundation and core-image in realtime

我怕爱的太早我们不能终老 · Submitted on 2019-11-27 21:56:54
Question: I am currently building a camera app prototype which should recognize sheets of paper lying on a table. The key point is that it should do the recognition in real time, so I capture the camera's video stream, which in iOS 5 can easily be done with AV Foundation. I looked here and here; they do some basic object recognition there. I have found that using the OpenCV library in this real-time environment does not perform well. So what I need is an algorithm to

CIDetector and UIImagePickerController

有些话、适合烂在心里 · Submitted on 2019-11-27 21:54:39
Question: I'm trying to implement the built-in iOS 5 face detection API. I'm using an instance of UIImagePickerController to let the user take a photo, and then I'm trying to use CIDetector to detect facial features. Unfortunately, featuresInImage: always returns an array of size 0. Here's the code:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
    NSNumber

Having trouble creating UIImage from CIImage in iOS5

血红的双手。 · Submitted on 2019-11-27 19:49:27
I'm using the AVFoundation framework. In my sample buffer delegate I have the following code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pb = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pb];
    self.imageView.image = [UIImage imageWithCIImage:ciImage];
}

I am able to use the CIImage to run the face detector, etc., but it does not show up in the UIImageView... the image view remains white. Any ideas as to the problem? I

How to detect eye pupils and measure distance between pupils in iPhone

∥☆過路亽.° · Submitted on 2019-11-27 16:57:01
Question: I have studied a lot of examples of face detection, and I have also detected the eyes on iPhone using CIDetector and HaarCascade_eye.xml. But I want to detect the pupils of the eyes and measure the distance between the pupils. Please guide me on how I can do that.

Answer 1: To calculate the distance between two points, use the following formula: d = sqrt((x2 - x1)^2 + (y2 - y1)^2). This will get the center points of the two eyes (as detected by CIDetector) and compare their locations to output the measurements you're looking

How can I generate a barcode from a string in Swift?

生来就可爱ヽ(ⅴ<●) · Submitted on 2019-11-27 11:34:30
I am a new iOS developer. I was wondering how I can generate a barcode in Swift. I have the code already; there are multiple resources from which to learn how to read a barcode, but I didn't find any that talk about generating one from a string. Thanks a lot! P.S. I know there is a similar question about this, but it's for Objective-C. I don't know Obj-C, and coming from .NET I find it difficult.

Answer: You could use a Core Image (import CoreImage) filter to do that!

class Barcode {
    class func fromString(string: String) -> UIImage? {
        let data = string.data(using: .ascii)
        if let filter = CIFilter