core-image

How to extract dominant color from CIAreaHistogram?

泪湿孤枕 submitted on 2020-01-13 06:04:33
Question: I am looking to find the most dominant color in a UIImage on iOS (the color present in the most pixels), and I stumbled upon Core Image's filter-based API, particularly CIAreaHistogram. It seems like this filter could help me, but I am struggling to understand the API. Firstly, it says the output of the filter is a one-dimensional image that is the length of your input bins and one pixel in height. How do I read this data? I basically want to figure out the color value with the highest…
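
For reference, a minimal sketch of one way to read that one-pixel-high output, assuming Swift; the function name histogramBins and the 64-bin default are illustrative. CIContext.render can copy the bin row into a float buffer:

    import CoreGraphics
    import CoreImage

    // Sketch: read CIAreaHistogram's output (a binCount x 1 image whose pixel
    // values are the bin counts) into a [Float] that can be scanned for peaks.
    func histogramBins(for image: CIImage, binCount: Int = 64) -> [Float]? {
        guard let filter = CIFilter(name: "CIAreaHistogram") else { return nil }
        filter.setValue(image, forKey: kCIInputImageKey)
        filter.setValue(CIVector(cgRect: image.extent), forKey: kCIInputExtentKey)
        filter.setValue(binCount, forKey: "inputCount")
        filter.setValue(1.0, forKey: "inputScale")
        guard let output = filter.outputImage else { return nil }

        // Each bin is one RGBA pixel; render the row as 32-bit float components.
        var bins = [Float](repeating: 0, count: binCount * 4)
        let context = CIContext(options: [.workingColorSpace: NSNull()])
        context.render(output,
                       toBitmap: &bins,
                       rowBytes: binCount * 4 * MemoryLayout<Float>.size,
                       bounds: CGRect(x: 0, y: 0, width: binCount, height: 1),
                       format: .RGBAf,
                       colorSpace: nil)
        return bins
    }

Note that CIAreaHistogram bins each channel separately, so the per-channel peaks do not necessarily combine into one dominant color; per-pixel counting may still be needed for a true mode.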

Having trouble creating a UIImage from a CIImage in iOS 5

烈酒焚心 submitted on 2020-01-09 06:50:41
Question: I'm using the AVFoundation framework. In my sample buffer delegate I have the following code:

    -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        CVPixelBufferRef pb = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pb];
        self.imageView.image = [UIImage imageWithCIImage:ciImage];
    }

I am able to use the CIImage to run the face detector etc., but…
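
A frequently reported cause: UIImage(ciImage:) wraps the filter recipe without rendering it, and on iOS 5 UIImageView often cannot display such an image. A minimal sketch of the common workaround in Swift (sharedContext is an illustrative name; creating a CIContext per frame is expensive):

    import CoreImage
    import UIKit

    // Reuse one context rather than allocating one per captured frame.
    let sharedContext = CIContext(options: nil)

    // Render the CIImage to a CGImage first, then wrap it in a UIImage that
    // UIImageView can display directly.
    func displayableImage(from ciImage: CIImage) -> UIImage? {
        guard let cgImage = sharedContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }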

Memory leak when filtering with Core Image

自古美人都是妖i submitted on 2020-01-04 02:43:07
Question: So I've been using Core Image to apply filters to images. Everything is fine, except that when I try to apply the same filter over and over again the application just quits; I guess it's a memory leak. Here's the code:

    -(UIImage *)applyFilter:(UIImage *)picture {
        UIImageOrientation originalOrientation = picture.imageOrientation;
        CGFloat originalScale = picture.scale;
        CIImage *beginImage = [CIImage imageWithCGImage:picture.CGImage];
        CIContext *context = [CIContext contextWithOptions:nil];
        CIFilter …
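
For context, a sketch of the usual fix for this pattern (not the poster's exact code; the filter name is an example): create the CIContext once instead of on every call, and drain an autorelease pool per iteration so intermediate buffers are freed promptly:

    import CoreImage
    import UIKit

    // One shared context; allocating a new CIContext on every filter call
    // holds onto GPU/CPU resources and is a common cause of this kind of crash.
    let filterContext = CIContext(options: nil)

    func applySepia(to picture: UIImage, intensity: Double = 0.8) -> UIImage? {
        guard let cgInput = picture.cgImage else { return nil }
        return autoreleasepool { () -> UIImage? in
            let input = CIImage(cgImage: cgInput)
            guard let filter = CIFilter(name: "CISepiaTone") else { return nil }
            filter.setValue(input, forKey: kCIInputImageKey)
            filter.setValue(intensity, forKey: kCIInputIntensityKey)
            guard let output = filter.outputImage,
                  let cgOutput = filterContext.createCGImage(output, from: output.extent)
            else { return nil }
            // Preserve the original orientation and scale, as the question does.
            return UIImage(cgImage: cgOutput,
                           scale: picture.scale,
                           orientation: picture.imageOrientation)
        }
    }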

Replace exactly one pixel in an image and put it in another image via Swift

扶醉桌前 submitted on 2020-01-03 04:53:19
Question: Simply put, if I have an image I and another image J, I want to take the RGB value at a position I(t,s) and assign that pixel to J(t,s). How might I do this in Core Image, or using a custom kernel? This seems like it might not be an easy thing to do, considering the way Core Image works. However, I was wondering whether there is a way to extract the value of the pixel at (t,s), create an image K as large as J with just that pixel, and then overlay J with K only at that one point. Just…
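
A minimal sketch of the crop-and-composite idea the question describes, assuming Core Image's lower-left origin and that (t, s) are pixel coordinates:

    import CoreGraphics
    import CoreImage

    // Build K by cropping I to the single pixel at (t, s), then composite K
    // over J; everywhere outside that 1x1 rect, J shows through unchanged.
    func replacePixel(in j: CIImage, from i: CIImage, t: CGFloat, s: CGFloat) -> CIImage {
        let onePixel = i.cropped(to: CGRect(x: t, y: s, width: 1, height: 1))
        return onePixel.composited(over: j)
    }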

CIFilter available in iOS?

泄露秘密 submitted on 2020-01-02 05:43:36
Question: I want to know for sure before I start implementing my own filters (probably using OpenGL ES): are CIFilters available on iOS? Is there anything similar?

Answer 1: CIFilters are not currently available in the SDK. If it's something you'd wish to see, you should file a request with Apple.

Answer 2: Update: CIFilters are now included in iOS 5.

Source: https://stackoverflow.com/questions/483039/cifilter-available-in-ios
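
For reference, a minimal sketch of the CIFilter API that became available in iOS 5 (the filter name and radius here are just examples):

    import CoreImage

    // Built-in filters are looked up by name and configured via KVC keys.
    func blurred(_ input: CIImage, radius: Double = 4.0) -> CIImage? {
        guard let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        return filter.outputImage
    }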

Crash upon CGImageDestinationFinalize

主宰稳场 submitted on 2020-01-02 00:53:13
Question: My app allows users to edit photos using the Photos framework. I am seeing some crash reports, and it appears the crash occurs when generating the output image, but I am not sure where the problem lies. The crash occurs on multiple hardware devices and multiple versions of iOS 9, including the latest, 9.1. The last call my app makes is CGImageDestinationFinalize, in order to create the edited image's NSData. The crash reports show that calls continue in the CoreImage space before the crash…
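
For orientation, a sketch of the call sequence the question describes (not the poster's code; names are illustrative). CGImageDestinationFinalize returns a Bool, and very large edited images can make the final render fail or exhaust memory, so the result is worth guarding:

    import ImageIO
    import MobileCoreServices

    // Write a CGImage into a Data buffer as JPEG via CGImageDestination.
    func jpegData(from image: CGImage, quality: CGFloat = 0.9) -> Data? {
        let data = NSMutableData()
        guard let destination = CGImageDestinationCreateWithData(
            data as CFMutableData, kUTTypeJPEG, 1, nil) else { return nil }
        let options = [kCGImageDestinationLossyCompressionQuality: quality] as CFDictionary
        CGImageDestinationAddImage(destination, image, options)
        // Finalize can fail; do not assume success.
        guard CGImageDestinationFinalize(destination) else { return nil }
        return data as Data
    }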

Recording videos with real-time filters in Swift

*爱你&永不变心* submitted on 2020-01-01 05:47:05
Question: I am new to Swift and am trying to build a camera app that can apply real-time filters and save the video with the filters applied. So far I can preview in real time with the filters applied, but when I save the video it is all black.

    import UIKit
    import AVFoundation
    import AssetsLibrary
    import CoreMedia
    import Photos

    class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
        var captureSession: AVCaptureSession!
        @IBOutlet weak var previewView: UIView!
        @IBOutlet weak var …
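
A common culprit and its fix, sketched below (renderContext and the function name are illustrative, not the poster's code): the preview shows the filtered CIImage, but the AVAssetWriter must also be fed filtered pixels. Rendering the filter output into a pixel buffer from the adaptor's pool before appending avoids recording black or unfiltered frames:

    import AVFoundation
    import CoreImage

    // Reuse one context for all frames.
    let renderContext = CIContext()

    func appendFiltered(_ image: CIImage,
                        at time: CMTime,
                        to adaptor: AVAssetWriterInputPixelBufferAdaptor) {
        guard adaptor.assetWriterInput.isReadyForMoreMediaData,
              let pool = adaptor.pixelBufferPool else { return }
        var buffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
        guard let pixelBuffer = buffer else { return }
        // Draw the filtered frame into the buffer the writer will consume.
        renderContext.render(image, to: pixelBuffer)
        if !adaptor.append(pixelBuffer, withPresentationTime: time) {
            // Check the writer's status / error here on failure.
        }
    }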

What is the solution for the deprecation of “EAGLContext”?

末鹿安然 submitted on 2019-12-30 07:44:07
Question: I want to use the native filters in my app. The function works, but I want to avoid methods that are deprecated and will eventually be removed. I have searched the whole internet and haven't found any solution to my problem.

    public func applyFilterTo(image: UIImage, filterEffect: Filter) -> UIImage? {
        guard let cgImage = image.cgImage,
              let openGLContext = EAGLContext(api: .openGLES3) else {
            return nil
        }
        let context = CIContext(eaglContext: …
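
A sketch of the usual replacement: since EAGLContext (OpenGL ES) is deprecated, back the CIContext with Metal instead. The filter name below is just an example standing in for the question's Filter type:

    import CoreImage
    import Metal
    import UIKit

    func applyFilter(to image: UIImage, filterName: String = "CISepiaTone") -> UIImage? {
        guard let cgImage = image.cgImage,
              let device = MTLCreateSystemDefaultDevice(),
              let filter = CIFilter(name: filterName) else { return nil }
        // A Metal-backed context replaces CIContext(eaglContext:).
        let context = CIContext(mtlDevice: device)
        filter.setValue(CIImage(cgImage: cgImage), forKey: kCIInputImageKey)
        guard let output = filter.outputImage,
              let result = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: result, scale: image.scale, orientation: image.imageOrientation)
    }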

iOS 7 Core Image QR code generation too blurry

梦想与她 submitted on 2019-12-29 14:20:17
Question: Here's my code for generating a QR code image:

    + (UIImage *)generateQRCodeWithString:(NSString *)string {
        NSData *stringData = [string dataUsingEncoding:NSUTF8StringEncoding];
        CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
        [filter setValue:stringData forKey:@"inputMessage"];
        [filter setValue:@"M" forKey:@"inputCorrectionLevel"];
        return [UIImage imageWithCIImage:filter.outputImage];
    }

The result is too blurry. Is it possible to set the size of the generated QR code?

Answer 1: I was…
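
For reference, the widely used fix, sketched in Swift: the generator emits a tiny image (one point per QR module), so scale the CIImage itself with an affine transform before rendering. This keeps the modules sharp, unlike stretching the small UIImage in a view:

    import CoreImage
    import UIKit

    func qrCode(from string: String, scale: CGFloat = 10) -> UIImage? {
        guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
        filter.setValue(string.data(using: .utf8), forKey: "inputMessage")
        filter.setValue("M", forKey: "inputCorrectionLevel")
        // Enlarge the tiny generator output before it is rasterized.
        guard let output = filter.outputImage?
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale)) else { return nil }
        // Render to real pixels for reliable display.
        let context = CIContext()
        guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }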

What's up with CITemperatureAndTint having vector inputs?

和自甴很熟 submitted on 2019-12-29 05:26:09
Question: OK, so the Core Image filter CITemperatureAndTint has two inputs, neutral and targetNeutral. However, my biggest issue is the fact that they're both two-component vectors, meaning each takes two numeric inputs. I would expect the first component to range from, say, 2500 to 10000. What is the vector for?

Answer 1: The essential purpose of performing temperature and tint adjustment is to correct the white balance of a captured image: to account for the ambient illumination of the scene and adjust colors so…
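
To make the vector layout concrete, a sketch in Swift (the function name is illustrative): each CIVector packs a (temperature, tint) pair, where x is the color temperature in kelvin and y is a green-magenta tint offset. inputNeutral describes the white point assumed for the source image; inputTargetNeutral is where that neutral should be mapped:

    import CoreGraphics
    import CoreImage

    func adjustWhiteBalance(of image: CIImage,
                            sourceKelvin: CGFloat, sourceTint: CGFloat = 0,
                            targetKelvin: CGFloat, targetTint: CGFloat = 0) -> CIImage? {
        guard let filter = CIFilter(name: "CITemperatureAndTint") else { return nil }
        filter.setValue(image, forKey: kCIInputImageKey)
        // x = temperature in kelvin, y = tint offset (the second component).
        filter.setValue(CIVector(x: sourceKelvin, y: sourceTint), forKey: "inputNeutral")
        filter.setValue(CIVector(x: targetKelvin, y: targetTint), forKey: "inputTargetNeutral")
        return filter.outputImage
    }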