core-image

Can't save CIImage to file on iOS without memory leaks

Submitted by 梦想的初衷 on 2019-12-11 02:38:33
Question: The following snippet of code saves a CIImage to disk using a UIImage.

- (void)applicationWillResignActive:(UIApplication *)application {
    NSString *filename = @"Test.png";
    UIImage *image = [UIImage imageNamed:filename];
    // make some image processing then store the output
    CIImage *processedImage = [CIImage imageWithCGImage:image.CGImage];
#if 1 // save using context
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgiimage = [context createCGImage:processedImage fromRect …
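A minimal Swift sketch of how this kind of leak is commonly avoided (an illustration, not from the original question): reuse one CIContext instead of creating one per save, let ARC release the CGImage returned by createCGImage (under manual Objective-C memory management it must be paired with CGImageRelease), and wrap repeated saves in an autoreleasepool. The save(_:to:) helper name is hypothetical.

```swift
import UIKit
import CoreImage

// Reuse one CIContext for the app's lifetime; creating a fresh context per
// save is expensive and its internal caches can look like a leak.
let sharedContext = CIContext(options: nil)

// Hypothetical helper: render a CIImage and write it to disk as PNG.
func save(_ image: CIImage, to url: URL) throws {
    try autoreleasepool {
        // createCGImage returns an owned CGImage; ARC releases it in Swift,
        // but Objective-C under MRC must call CGImageRelease() on it.
        guard let cgImage = sharedContext.createCGImage(image, from: image.extent),
              let data = UIImage(cgImage: cgImage).pngData() else {
            throw CocoaError(.fileWriteUnknown)
        }
        try data.write(to: url)
    }
}
```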

CIColorMap doesn't apply gradient uniformly to input image

Submitted by [亡魂溺海] on 2019-12-11 02:23:43
Question: I created a CIColorMap filter by loading the attached blue-to-red gradient image. Then I tried to apply it to a grayscale input image, which is simply a linear gradient from black to white. When I draw the output image of CIColorMap in a CIContext, the intention is to render black as dark blue, white as dark red, and mid-gray as white. However, in the actual result, the mid-gray is mapped to light blue instead of the white I expected. Below is …
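A likely explanation, offered as an assumption rather than taken from the original post: by default Core Image converts inputs into a linear working color space, so sRGB mid-gray (0.5) becomes roughly 0.21 and samples near the blue end of the map. A minimal sketch of the common workaround, loading the gradient without color management (the colorMapped(_:gradientURL:) helper is hypothetical):

```swift
import CoreImage

// Hypothetical helper: apply CIColorMap without color-managing the gradient,
// so a 0.5 gray input samples the middle of the map rather than ~0.21.
func colorMapped(_ input: CIImage, gradientURL: URL) -> CIImage? {
    // NSNull() as the color space loads the gradient without converting it
    // into Core Image's linear working space.
    guard let gradient = CIImage(contentsOf: gradientURL,
                                 options: [.colorSpace: NSNull()]) else { return nil }
    let filter = CIFilter(name: "CIColorMap")!
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(gradient, forKey: "inputGradientImage")
    return filter.outputImage
}
```

Creating the rendering CIContext with [.workingColorSpace: NSNull()] is the complementary knob when the whole pipeline should be unmanaged.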

How to get AVPlayer to redraw current AVItem videoComposition when paused

Submitted by 会有一股神秘感。 on 2019-12-11 00:27:29
Question: I'm building a simple video editor for macOS: a movie file is loaded as an AVAsset, transformed by a series of CIFilters in an AVVideoComposition, and played by an AVPlayer. I present UI controls for some of the CIFilter parameters. While the video is playing everything works great: I drag sliders and the effects change! But when the video is paused, the AVPlayerView doesn't redraw after the controls in the UI are changed. How can I encourage the AVPlayerView to redraw the contents of the …
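One widely used workaround, sketched under the assumption that the composition reads the filter parameters each time it renders: nudge the paused player with a zero-tolerance seek to its current time, which forces the AVVideoComposition to re-render the displayed frame. The filterParameterDidChange(player:) name is hypothetical.

```swift
import AVFoundation

// Hypothetical hook called whenever a slider changes while playback is paused.
// A zero-tolerance seek to the current time forces the AVVideoComposition
// to re-render the displayed frame with the updated filter parameters.
func filterParameterDidChange(player: AVPlayer) {
    guard player.rate == 0 else { return }  // while playing, frames redraw anyway
    let now = player.currentTime()
    player.seek(to: now, toleranceBefore: .zero, toleranceAfter: .zero)
}
```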

How to make use of kCIFormatRGBAh to get half floats on iOS with Core Image?

Submitted by 浪尽此生 on 2019-12-10 21:35:47
Question: I'm trying to get the per-pixel RGBA values of a CIImage in floating point. I expect the following to work, using CIContext and rendering as kCIFormatRGBAh, but the output is all zeroes. Otherwise my next step would be converting from half floats to full floats. What am I doing wrong? I've also tried this in Objective-C and get the same result.

let image = UIImage(named: "test")!
let sourceImage = CIImage(CGImage: image.CGImage)
let context = CIContext(options: [kCIContextWorkingColorSpace: NSNull …
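A sketch of one way to get non-zero output, assuming the problem is buffer layout rather than the image itself: RGBAh needs 8 bytes per pixel (four Float16 channels), so rowBytes must be width * 8 and the destination buffer sized to match. This uses the modern render(_:toBitmap:rowBytes:bounds:format:colorSpace:) API and Swift's Float16 (Swift 5.3+, iOS 14+):

```swift
import CoreImage

// Render a CIImage into a half-float (RGBAh) bitmap and read it back as
// Float16 values. rowBytes must be width * 4 channels * 2 bytes per channel.
func halfFloatPixels(of image: CIImage, context: CIContext) -> [Float16] {
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    let rowBytes = width * 4 * MemoryLayout<Float16>.size
    var pixels = [Float16](repeating: 0, count: width * height * 4)
    pixels.withUnsafeMutableBytes { buffer in
        context.render(image,
                       toBitmap: buffer.baseAddress!,
                       rowBytes: rowBytes,
                       bounds: image.extent,
                       format: .RGBAh,
                       colorSpace: nil)  // nil: no color conversion on output
    }
    return pixels
}
```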

Filtering a live camera feed

Submitted by 冷暖自知 on 2019-12-10 17:41:30
Question: So I've been using UIImagePickerController to access the camera for photo and video capture, and I wanted to apply filters to those two sources. I succeeded in filtering taken photos, but I'm having trouble finding a solution for the rest. All I need is access to the raw image data: the live image feed that the camera is showing, so I can apply the filter and show the filtered frames instead. Any help or advice will be appreciated.

Answer 1: UIImagePickerController doesn't give you low level access …
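The standard route the answer points toward is an AVCaptureSession with an AVCaptureVideoDataOutput, filtering each frame with Core Image in the delegate callback. A minimal sketch, with session configuration and on-screen rendering omitted (the CISepiaTone choice is just an example):

```swift
import AVFoundation
import CoreImage

final class FilteredCameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let context = CIContext()
    let filter = CIFilter(name: "CISepiaTone")!

    // Called for every camera frame: wrap the pixel buffer in a CIImage,
    // run it through the filter, then hand the result off for display.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        filter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
        guard let filtered = filter.outputImage else { return }
        // Render `filtered` into an MTKView / CALayer of your choice here.
        _ = context.createCGImage(filtered, from: filtered.extent)
    }
}
```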

Getting UIImage from CIImage does not work properly

Submitted by 夙愿已清 on 2019-12-10 17:19:18
Question: I am having trouble getting a UIImage from a CIImage. The line of code below works fine on iOS 6 (outputImage is a CIImage): self.imageView = [UIImage imageWithCIImage:outputImage]; or [self.imageView setImage:[UIImage imageWithCIImage:outputImage]]; When I run this same line of code on a device running iOS 5, the imageView is blank. If I log the size property of the UIImage it is correct, but the image never displays on screen. When I use a CGImageRef (as shown below) it …
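The usual fix for older iOS versions is to render through a CIContext first, because a UIImage created with imageWithCIImage: has no backing CGImage and some drawing paths cannot display it. A Swift sketch of that explicit render (the helper name is hypothetical):

```swift
import UIKit
import CoreImage

// Render the CIImage into a real bitmap-backed UIImage before display.
// UIImage(ciImage:) carries no underlying CGImage, which some code paths
// (notably on older iOS releases) cannot draw.
func bitmapBackedImage(from ciImage: CIImage, context: CIContext) -> UIImage? {
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```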

Core Image: after using CICrop, applying a compositing filter doesn't line up

Submitted by 非 Y 不嫁゛ on 2019-12-10 14:53:23
Question: I'm using CICrop to crop an image to a certain size by cutting off the top and bottom of the image. Afterwards, I apply something like the CIMultiplyCompositing filter to combine the cropped image with another image. Both images are the same size; however, the result shows that the two images don't line up: one is offset. So, I checked the following:

NSLog(@"image after crop: %g, %g, %g, %g", imageToFilter.extent.origin.x, imageToFilter.extent.origin.y, imageToFilter.extent.size.width, …
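The usual cause: CICrop preserves the original coordinate system, so the cropped image's extent origin is the crop rectangle's position, not (0, 0). Translating the image back to the origin before compositing aligns the inputs. A Swift sketch (the helper name is hypothetical):

```swift
import CoreImage

// After CICrop, extent.origin keeps the crop rect's position within the
// original image. Translate back to (0, 0) so compositing inputs line up.
func normalizedToOrigin(_ image: CIImage) -> CIImage {
    let origin = image.extent.origin
    return image.transformed(by: CGAffineTransform(translationX: -origin.x,
                                                   y: -origin.y))
}
```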

What's the “need a swizzler so that RGB8 can be read” error that Core Image gives on iOS 9?

Submitted by 人走茶凉 on 2019-12-10 13:33:59
Question: First of all, I have thought of a workaround, but it's not a good one; I'll give it at the end. When I apply a filter on iOS 9, I get the "need a swizzler so that RGB8 can be read" error message and the returned image is totally black, from this method: [self.context createCGImage:self.outputImage fromRect:[self.outputImage extent]]; which is called in here:

- (UIImage *)fliterImage:(UIImage *)input flitername:(NSString *)name {
    NSString *fliter_name = name;
    self.context = [CIContext contextWithOptions:nil];
    UIImage *image; …
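A hedged workaround, under the assumption that the error comes from a source bitmap in a format Core Image can't sample directly (for example 24-bit RGB with no alpha channel): redraw the UIImage into a standard RGBA bitmap before creating the CIImage. A sketch using APIs that exist on iOS 9 (the helper name is hypothetical):

```swift
import UIKit

// Redraw a UIImage so its backing CGImage is in a standard RGBA format.
// Assumption: "need a swizzler so that RGB8 can be read" is triggered by an
// alpha-less 24-bit source bitmap that Core Image cannot sample directly.
func rgbaNormalized(_ image: UIImage) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: image.size))
    return UIGraphicsGetImageFromCurrentImageContext()
}
```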

Core Image GPU performance too slow

Submitted by 末鹿安然 on 2019-12-10 12:19:36
Question: I was playing with Core Image filters and encountered a strange benchmark. With the following two functions, one doing the heavy math on the CPU and the other on the GPU as the names suggest, CPU performance is about a hundred times faster than GPU performance. I tried the "CILineOverlay" and "CIPhotoEffectProcess" filters and measured the transform time with DispatchTime.now(). Am I doing something wrong? Or is it related to deprecated OpenGL support?

private func apply_cpu(to image:UIImage?, …
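A common flaw in benchmarks like this, stated as an assumption about the code rather than a diagnosis of it: CIContext creation and first-render shader compilation land inside the timed region and dwarf the actual GPU work. A sketch that reuses one context and warms it up before timing (names are hypothetical):

```swift
import CoreImage
import QuartzCore

// Benchmark only the render: reuse one CIContext (creating one is expensive)
// and do an untimed warm-up pass so one-time shader compilation and upload
// costs don't land inside the measurement.
let gpuContext = CIContext()  // GPU-backed by default

func timeRender(of image: CIImage) -> CFTimeInterval {
    _ = gpuContext.createCGImage(image, from: image.extent)  // warm-up, untimed

    let start = CACurrentMediaTime()
    _ = gpuContext.createCGImage(image, from: image.extent)
    return CACurrentMediaTime() - start
}
```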

UIImage masking with gesture

Submitted by 廉价感情. on 2019-12-10 11:35:26
Question: I'm trying to achieve a selective-color feature in iOS. My idea is to first draw a shape with a finger gesture and convert it into a mask, but it should be real time: it should work as I move my finger across the grayscale image. Can anyone point me down the correct path? Sample app: https://itunes.apple.com/us/app/color-splash/id304871603?mt=8 Thanks.

Answer 1: You can position two UIImageViews over each other, the color version in the background and the black & white version in …
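A sketch of the layering the answer describes, inverted slightly so the mask can be a simple stroked path: the color image sits on top, masked by the path the pan gesture draws, so the finger paints color back over the grayscale copy beneath (layout and image assignment omitted; all names are illustrative):

```swift
import UIKit

// Two stacked image views: grayscale below, color on top. The color view is
// masked by the path the user draws, so panning "paints" color back in.
final class SelectiveColorView: UIView {
    let grayView = UIImageView()
    let colorView = UIImageView()
    private let maskLayer = CAShapeLayer()
    private let path = UIBezierPath()

    func setUp() {
        addSubview(grayView)
        addSubview(colorView)
        maskLayer.strokeColor = UIColor.black.cgColor  // any opaque color works
        maskLayer.fillColor = nil
        maskLayer.lineWidth = 30
        maskLayer.lineCap = .round
        colorView.layer.mask = maskLayer
    }

    @objc func handlePan(_ pan: UIPanGestureRecognizer) {
        let point = pan.location(in: self)
        if pan.state == .began { path.move(to: point) }
        else { path.addLine(to: point) }
        maskLayer.path = path.cgPath  // updates live as the finger moves
    }
}
```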