core-image

Core Image CIColorControls brightness filter creates wrong effect. How do I change my image's luminance?

我只是一个虾纸丫 submitted on 2019-11-30 00:24:13
I'm creating a color picker for iOS. I would like to enable the user to select the brightness (luminance) and have the color wheel reflect this change. I'm using Core Image to modify the brightness with the CIColorControls filter. Here's my code:

-(CIImage *)oldPhoto:(CIImage *)img withBrightness:(float)intensity {
    CIFilter *lighten = [CIFilter filterWithName:@"CIColorControls"];
    [lighten setValue:img forKey:kCIInputImageKey];
    [lighten setValue:@((intensity * 2.0) - 1.0) forKey:@"inputBrightness"];
    return lighten.outputImage;
}

Here's how the color wheel looks with intensity = 0.5
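
For context: inputBrightness adds a constant offset to every channel, which pushes the wheel toward white or black rather than dimming it. A minimal sketch of a multiplicative alternative using CIColorMatrix, which scales each channel and so preserves hue and saturation (the method name is illustrative, not from the question):

// Hedged sketch: scale RGB by `intensity` (0..1) instead of offsetting it.
- (CIImage *)wheelImage:(CIImage *)img withLuminance:(CGFloat)intensity {
    CIFilter *matrix = [CIFilter filterWithName:@"CIColorMatrix"];
    [matrix setValue:img forKey:kCIInputImageKey];
    [matrix setValue:[CIVector vectorWithX:intensity Y:0 Z:0 W:0] forKey:@"inputRVector"];
    [matrix setValue:[CIVector vectorWithX:0 Y:intensity Z:0 W:0] forKey:@"inputGVector"];
    [matrix setValue:[CIVector vectorWithX:0 Y:0 Z:intensity W:0] forKey:@"inputBVector"];
    return matrix.outputImage;   // alpha vector keeps its default of (0, 0, 0, 1)
}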

Key differences between Core Image and GPUImage

荒凉一梦 submitted on 2019-11-29 23:32:05
What are the major differences between the Core Image and GPUImage frameworks (besides GPUImage being open source)? At a glance their interfaces seem pretty similar... applying a series of filters to an input to create an output. I see a few small differences, such as the easy-to-use LookupFilter that GPUImage has. I am trying to figure out why someone would choose one over the other for a photo filtering application. Brad Larson: As the author of GPUImage, you may want to take what I say with a grain of salt. I should first say that I have a tremendous amount of respect for the Core Image team

iOS: Core Image and multi-threaded apps

◇◆丶佛笑我妖孽 submitted on 2019-11-29 14:57:05
I am trying to run some Core Image filters in the most efficient way possible, trying to avoid the memory warnings and crashes I am getting when rendering large images. I am looking at Apple's Core Image Programming Guide. Regarding multi-threading it says: "each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly." What does this mean? I am in fact attempting to run my filters on a background thread, so I can run a HUD on the main thread (see below)
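
In practice that guideline means the CIFilter (and, here, the CIContext) should be created on the background queue that uses them, and only the UI work goes back to the main queue. A minimal sketch under that assumption; the filter name, parameter, and the UIImageView are placeholders, not from the question:

// Hedged sketch: per-thread CIFilter/CIContext, UIKit only on the main queue.
- (void)applySepiaTo:(CIImage *)inputCIImage updating:(UIImageView *)imageView {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];  // created on this thread
        [filter setValue:inputCIImage forKey:kCIInputImageKey];
        [filter setValue:@0.8 forKey:@"inputIntensity"];

        CIContext *context = [CIContext contextWithOptions:nil];     // also created here
        CGImageRef cgImage = [context createCGImage:filter.outputImage
                                           fromRect:filter.outputImage.extent];
        UIImage *result = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);

        dispatch_async(dispatch_get_main_queue(), ^{
            imageView.image = result;   // HUD updates/dismissal would also go here
        });
    });
}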

Image auto-rotates after using CIFilter

北战南征 submitted on 2019-11-29 11:42:28
I am writing an app that lets users take a picture and then edit it. I am working on implementing tools with UISliders for brightness/contrast/saturation and am using the Core Image Filter class to do so. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture, and then use any of the described slider tools, the image will rotate counterclockwise 90 degrees. Here's the code in question:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.navigationItem.hidesBackButton = YES; //hide default nav /
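
The question is truncated, so take this as a guess at the cause: the round trip UIImage -> CIImage -> CGImage drops the original imageOrientation, and rebuilding the result with imageWithCGImage: alone assumes UIImageOrientationUp, which is exactly a 90-degree rotation for portrait camera photos. A minimal sketch of preserving the orientation (method and variable names are illustrative):

// Hedged sketch: rebuild the UIImage with the original photo's scale and orientation.
- (UIImage *)imageFromFilter:(CIFilter *)filter matchingOriginal:(UIImage *)original {
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:filter.outputImage
                                       fromRect:filter.outputImage.extent];
    UIImage *edited = [UIImage imageWithCGImage:cgImage
                                          scale:original.scale
                                    orientation:original.imageOrientation];
    CGImageRelease(cgImage);
    return edited;
}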

Formatting CIColorCube data

时光总嘲笑我的痴心妄想 submitted on 2019-11-29 11:29:49
Recently, I've been trying to set up a CIColorCube on a CIImage to create a custom effect. Here's what I have now:

uint8_t color_cube_data[8*4] = {
    0,   0,   0,   1,
    255, 0,   0,   1,
    0,   255, 0,   1,
    255, 255, 0,   1,
    0,   0,   255, 1,
    255, 0,   255, 1,
    0,   255, 255, 1,
    255, 255, 255, 1
};
NSData *cube_data = [NSData dataWithBytes:color_cube_data length:8*4*sizeof(uint8_t)];
CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
[filter setValue:beginImage forKey:kCIInputImageKey];
[filter setValue:@2 forKey:@"inputCubeDimension"];
[filter setValue:cube_data forKey:@"inputCubeData"];
outputImage = [filter
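
For reference, CIColorCube expects the cube as 32-bit floating-point RGBA values in the 0..1 range, not bytes. A sketch of the same 2x2x2 cube in that layout (`beginImage` is the variable from the question above):

// Hedged sketch: identical cube, but as floats in 0..1 as CIColorCube expects.
float cube[32] = {                      // 2*2*2 entries, 4 floats (RGBA) each
    0, 0, 0, 1,   1, 0, 0, 1,
    0, 1, 0, 1,   1, 1, 0, 1,
    0, 0, 1, 1,   1, 0, 1, 1,
    0, 1, 1, 1,   1, 1, 1, 1
};
NSData *cubeData = [NSData dataWithBytes:cube length:sizeof(cube)];
CIFilter *cubeFilter = [CIFilter filterWithName:@"CIColorCube"];
[cubeFilter setValue:beginImage forKey:kCIInputImageKey];
[cubeFilter setValue:@2 forKey:@"inputCubeDimension"];
[cubeFilter setValue:cubeData forKey:@"inputCubeData"];
CIImage *outputImage = cubeFilter.outputImage;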

How is the filter-picking UIScrollView/UICollectionView in Apple's Photos app implemented so that it opens so fast?

放肆的年华 submitted on 2019-11-29 08:45:35
I'm not asking about the exact code but the overall idea. Here is my problem: I'm trying to create something similar to the filter-choosing UI in the Photos app. I've tried multiple approaches, and all of them have drawbacks. 1) I've tried using Operation and OperationQueue with a collection view that has prefetching enabled. This loads the viewController fast but drops frames while scrolling. 2) Right now I'm using a scroll view and GCD, but it takes too long to load the viewController (because it applies all filters to all the buttons inside it at once); after that, though, it scrolls smoothly. NOTE: To answer
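
One common way to get both fast load and smooth scrolling is to pre-render each filter once into a small thumbnail on a background queue with a single shared CIContext, cache the results, and let the cells display nothing but cached UIImages. A minimal sketch under that assumption; every name here is illustrative:

// Hedged sketch: filter each small source image exactly once, off the main thread.
- (void)buildThumbnailsFromImage:(CIImage *)smallSource
                     filterNames:(NSArray<NSString *> *)filterNames
                           cache:(NSCache *)cache
                      completion:(void (^)(NSString *filterName, UIImage *thumb))completion {
    static CIContext *sharedContext;                       // one context for all thumbnails
    static dispatch_once_t once;
    dispatch_once(&once, ^{ sharedContext = [CIContext contextWithOptions:nil]; });

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        for (NSString *name in filterNames) {              // e.g. @"CIPhotoEffectMono"
            CIFilter *filter = [CIFilter filterWithName:name];
            [filter setValue:smallSource forKey:kCIInputImageKey];
            CGImageRef cg = [sharedContext createCGImage:filter.outputImage
                                                fromRect:smallSource.extent];
            UIImage *thumb = [UIImage imageWithCGImage:cg];
            CGImageRelease(cg);
            [cache setObject:thumb forKey:name];
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(name, thumb);                   // reload just that one cell
            });
        }
    });
}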

Is there a way to create a CGPath matching the outline of an SKSpriteNode?

空扰寡人 submitted on 2019-11-29 08:02:58
My goal is to create a CGPath that matches the outline of an SKSpriteNode. This would be useful in creating glows/outlines of SKSpriteNodes as well as a path for physics. One thought I have had is to use CIImage, but I have not really worked with it much at all, so I don't know whether there is a way to access/modify images on a pixel level. Then maybe I would be able to port something like this to Objective-C: http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/ I am also very open to other approaches that make this process automated as opposed to me creating shape paths for
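
The pixel-access step does not need CIImage at all: drawing the texture's CGImage into an alpha-only bitmap context gives a per-pixel inside/outside grid, which is exactly what a marching-squares tracer walks. A minimal sketch of that step (function name is illustrative; the caller frees the returned buffer):

// Hedged sketch: one byte of alpha per pixel; alpha[y * width + x] > 127 means "inside".
static uint8_t *CopyAlphaMap(CGImageRef image, size_t *outWidth, size_t *outHeight) {
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    uint8_t *alpha = calloc(width * height, 1);

    // Alpha-only bitmap context: no color space, 8 bits per pixel.
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width,
                                             NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
    CGContextRelease(ctx);

    *outWidth = width;
    *outHeight = height;
    return alpha;
}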

Determine the corners of a sheet of paper with iOS 5 AV Foundation and Core Image in real time

佐手、 submitted on 2019-11-29 02:42:10
I am currently building a camera app prototype which should recognize sheets of paper lying on a table. The catch is that it should do the recognition in real time, so I capture the video stream of the camera, which in iOS 5 can easily be done with AV Foundation. I looked at here and here; they are doing some basic object recognition there. I have found out that using the OpenCV library in this realtime environment does not work in a performant way. So what I need is an algorithm to determine the edges of an image without OpenCV. Does anyone have some sample code snippets which lay
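
For readers on newer systems: a built-in rectangle detector did not exist in iOS 5 (the question's target), but from iOS 8 onward Core Image can find a sheet's four corners per video frame without OpenCV. A minimal sketch inside the capture delegate, offered as an alternative rather than an iOS 5 answer:

// Hedged sketch: CIDetectorTypeRectangle (iOS 8+) on each AV Foundation video frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    static CIDetector *rectDetector;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        rectDetector = [CIDetector detectorOfType:CIDetectorTypeRectangle
                                          context:nil
                                          options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];
    });

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIRectangleFeature *sheet =
        (CIRectangleFeature *)[rectDetector featuresInImage:frame].firstObject;
    if (sheet) {
        // Four corners of the detected quadrilateral, in image coordinates.
        NSLog(@"corners: %@ %@ %@ %@",
              NSStringFromCGPoint(sheet.topLeft), NSStringFromCGPoint(sheet.topRight),
              NSStringFromCGPoint(sheet.bottomRight), NSStringFromCGPoint(sheet.bottomLeft));
    }
}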

Converting CIImage Into NSImage

让人想犯罪 __ submitted on 2019-11-29 02:38:11
I'm playing with the Core Image framework. As I understand it, if I have an image (NSImage), it needs to be converted into a CIImage first. I can do that:

NSImage *img1 = [[NSImage alloc] initWithContentsOfFile:imagepath];
NSRect rect1;
rect1.size.width = img1.size.width;
rect1.size.height = img1.size.height;
CGImageRef imageRef1 = [img1 CGImageForProposedRect:&rect1 context:[NSGraphicsContext currentContext] hints:nil];
CIImage *ciimage = [CIImage imageWithCGImage:imageRef1];

I have a function
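
For the reverse direction, a common approach is to wrap the CIImage in an NSCIImageRep and attach it to a new NSImage. A minimal sketch; `filteredImage` stands in for whatever CIImage comes back from the filter:

// Hedged sketch: CIImage -> NSImage via NSCIImageRep.
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:filteredImage];
NSImage *result = [[NSImage alloc] initWithSize:rep.size];
[result addRepresentation:rep];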

CIDetector and UIImagePickerController

半世苍凉 submitted on 2019-11-29 02:31:33
I'm trying to implement the built-in iOS 5 face detection API. I'm using an instance of UIImagePickerController to allow the user to take a photo and then I'm trying to use CIDetector to detect facial features. Unfortunately, featuresInImage always returns an array of size 0. Here's the code:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
    NSNumber *orientation = [NSNumber numberWithInt:[picture imageOrientation]];
    NSDictionary *imageOptions =
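
The code above is truncated, but a frequent cause of an empty result with camera photos is that CIDetectorImageOrientation expects an EXIF/TIFF orientation value (1-8), not a raw UIImageOrientation. A minimal sketch of the mapping and the detector call, using the question's `picture` variable:

// Hedged sketch: translate UIImageOrientation into the EXIF value CIDetector expects.
static int ExifOrientationForUIImage(UIImage *image) {
    switch (image.imageOrientation) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6;
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1;
}

// ...then, with the picked photo:
CIImage *ciImage = [CIImage imageWithCGImage:picture.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];
NSArray *features =
    [detector featuresInImage:ciImage
                      options:@{ CIDetectorImageOrientation : @(ExifOrientationForUIImage(picture)) }];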