core-image

Rendering small CIImage centered in MTKView

你。 submitted on 2019-12-23 04:58:19
Question: I'm rendering a CIImage into an MTKView, and the image is smaller than the drawable.

    let centered = image.transformed(by: CGAffineTransform(
        translationX: (view.drawableSize.width - image.extent.width) / 2,
        y: (view.drawableSize.height - image.extent.height) / 2))
    context.render(centered,
                   to: drawable.texture,
                   commandBuffer: buffer,
                   bounds: centered.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())

I'd expect the code above to render the image in the center of the view, but the image is positioned…
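One plausible fix, sketched under the assumption that the problem is the bounds argument: bounds selects the region of the image's coordinate space that gets mapped onto the texture, so passing the translated image's own extent cancels the translation out. Rendering with the full drawable rect instead keeps the image centered:

    import CoreImage
    import MetalKit

    // Sketch: render the translated image with bounds covering the whole
    // drawable, so the centering translation is preserved in the texture.
    func renderCentered(_ image: CIImage, in view: MTKView, drawable: CAMetalDrawable,
                        context: CIContext, buffer: MTLCommandBuffer) {
        let dx = (view.drawableSize.width - image.extent.width) / 2
        let dy = (view.drawableSize.height - image.extent.height) / 2
        let centered = image.transformed(by: CGAffineTransform(translationX: dx, y: dy))
        context.render(centered,
                       to: drawable.texture,
                       commandBuffer: buffer,
                       bounds: CGRect(origin: .zero, size: view.drawableSize), // not centered.extent
                       colorSpace: CGColorSpaceCreateDeviceRGB())
    }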

An example use of CIGaussianGradient filter for Core Image

你离开我真会死。 submitted on 2019-12-22 18:42:39
Question: I am looking for a code sample for this Core Image filter on iOS. Filters that take an inputImage parameter I can figure out how to use, but for the ones without an inputImage parameter I am not sure how they work. Here is the extract from Apple's documentation:

CIGaussianGradient
Generates a gradient that varies from one color to another using a Gaussian distribution.

Parameters
inputCenter - A CIVector class whose attribute type is CIAttributeTypePosition and whose display name is Center.…
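A minimal sketch of how generator filters like this are driven: they take no inputImage, only parameters, and produce an image of infinite extent that you crop to the size you need (key names are the filter's documented parameters; the center, colors, radius, and 200×200 crop here are arbitrary choices):

    import CoreImage

    // CIGaussianGradient is a generator: it synthesizes an image
    // purely from parameters, with no inputImage.
    let filter = CIFilter(name: "CIGaussianGradient")!
    filter.setValue(CIVector(x: 100, y: 100), forKey: "inputCenter")
    filter.setValue(CIColor(red: 1, green: 0, blue: 0), forKey: "inputColor0")
    filter.setValue(CIColor(red: 0, green: 0, blue: 0, alpha: 0), forKey: "inputColor1")
    filter.setValue(100 as NSNumber, forKey: "inputRadius")

    // The raw output has infinite extent, so crop before rendering.
    let gradient = filter.outputImage!.cropped(to: CGRect(x: 0, y: 0, width: 200, height: 200))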

CGImageDestinationFinalize or UIImageJPEGRepresentation - Crash when saving a large file on iOS 10

半城伤御伤魂 submitted on 2019-12-22 10:33:46
Question: I am trying to create tiles for a larger image, and it seems that as of iOS 10 the following code no longer works and crashes with EXC_BAD_ACCESS. This happens on iOS 10 devices only; iOS 9 works fine. The crash happens with any image larger than roughly 1300×1300. Profiling in Instruments doesn't yield anything interesting and points to CGImageDestinationFinalize. There is no memory spike. I tried both ways below:

    UIImage* tempImage = [UIImage imageWithCGImage:tileImage];
    NSData* imageData = …
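For reference, a minimal sketch of the CGImageDestination route the question names (in Swift; error handling and the compression quality are arbitrary choices, not the asker's code):

    import ImageIO
    import MobileCoreServices

    // Sketch: encode one CGImage tile to JPEG data via CGImageDestination.
    // CGImageDestinationFinalize is the call the reported crash points at.
    func jpegData(from tile: CGImage, quality: CGFloat = 0.8) -> Data? {
        let data = NSMutableData()
        guard let dest = CGImageDestinationCreateWithData(data, kUTTypeJPEG, 1, nil) else {
            return nil
        }
        let options = [kCGImageDestinationLossyCompressionQuality as String: quality] as CFDictionary
        CGImageDestinationAddImage(dest, tile, options)
        guard CGImageDestinationFinalize(dest) else { return nil }
        return data as Data
    }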

CIFaceFeature Bounds

不问归期 submitted on 2019-12-22 00:29:27
Question: While doing face detection work with CIFaceFeature, I ran into an issue with the bounds. When I tried to put a box around a recognized face, the frame was always misplaced. Other questions on Stack Overflow point out that the Core Image and UIKit coordinate systems are inverted.

[Images: the Core Image coordinate system and the UIKit coordinate system, from https://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html]

Obviously, this coordinate system difference is the…
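The standard conversion, sketched under the assumption that the face bounds come from CIDetector in the image's Core Image coordinates (bottom-left origin) and need flipping into UIKit's top-left origin:

    import CoreGraphics

    // Flip a rect from Core Image coordinates (origin bottom-left)
    // into UIKit coordinates (origin top-left).
    func uiKitRect(for faceBounds: CGRect, imageHeight: CGFloat) -> CGRect {
        var rect = faceBounds
        rect.origin.y = imageHeight - rect.origin.y - rect.height
        return rect
    }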

CIDetector won't release memory - Swift

吃可爱长大的小学妹 submitted on 2019-12-21 20:43:46
Question: After the face detection is done, the memory is not released. Is there a way I could release it? (The memory stays at 300 MB after the process is done.)

    autoreleasepool {
        manager.requestImageData(for: asset, options: option) { (data, responseString, imageOriet, info) in
            if data != nil {
                //let faces = (faceDetector?.features(in: CIImage(data: data!)!))
                guard let faces = self.faceDetector?.features(in: CIImage(data: data!)!) else { return }
                completionHandler(faces.count)
            } else {
                print(info)
            }
        }
    …
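One commonly cited explanation, offered as a sketch rather than a confirmed fix: requestImageData's result handler runs after the surrounding autoreleasepool has already drained, so the pool never covers the detection work. Moving the pool inside the handler scopes it to the detection itself:

    import Photos
    import CoreImage

    // Sketch: drain an autoreleasepool around the detection work,
    // which executes asynchronously inside the result handler.
    func detectFaces(in asset: PHAsset, manager: PHImageManager,
                     options: PHImageRequestOptions, detector: CIDetector,
                     completionHandler: @escaping (Int) -> Void) {
        manager.requestImageData(for: asset, options: options) { data, _, _, _ in
            guard let data = data, let image = CIImage(data: data) else { return }
            autoreleasepool {
                completionHandler(detector.features(in: image).count)
            }
        }
    }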

Horizontal Flip of a frame in Objective-C

旧时模样 submitted on 2019-12-21 19:28:05
Question: I am trying to create a filter for my program (which streams a webcam) that flips the frame horizontally, making the webcam act like a mirror. However, while it compiles and runs, the filter does not seem to have any effect. Here is the code:

    CIImage *resultImage = image;
    CIFilter *flipFilter = [CIFilter filterWithName:@"CIAffineTransform"];
    [flipFilter setValue:resultImage forKey:@"inputTransform"];
    NSAffineTransform* flipTransform = [NSAffineTransform transform];
    [flipTransform …
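One likely culprit, noted as an observation on the snippet above: the image is stored under the inputTransform key, so the filter receives neither a valid inputImage nor a transform. A sketch of the same mirror effect in Swift, using CIImage's transformed(by:) directly instead of the filter object:

    import CoreImage

    // Sketch: mirror a CIImage horizontally by scaling x by -1,
    // then translating the result back so its extent starts where it did.
    func horizontallyFlipped(_ image: CIImage) -> CIImage {
        let flipped = image.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
        return flipped.transformed(by: CGAffineTransform(
            translationX: image.extent.minX - flipped.extent.minX, y: 0))
    }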

Memory usage keeps rising on older devices using Metal

▼魔方 西西 submitted on 2019-12-21 15:18:13
Question: I use Metal and CADisplayLink to live-filter a CIImage and render it into an MTKView.

    // Starting the display link
    displayLink = CADisplayLink(target: self, selector: #selector(applyAnimatedFilter))
    displayLink.preferredFramesPerSecond = 30
    displayLink.add(to: .current, forMode: .default)

    @objc func applyAnimatedFilter() {
        ...
        metalView.image = filter.applyFilter(image: ciImage)
    }

According to the memory monitor in Xcode, memory usage is stable on an iPhone X and never goes above 100 MB; on devices…
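A common mitigation, offered as a sketch (filter.applyFilter and metalView.image are the question's own helpers, not framework API): wrap the per-frame work in an autoreleasepool so Core Image's temporary buffers are released every frame instead of accumulating between run-loop drains:

    @objc func applyAnimatedFilter() {
        // Drain temporaries each frame; on older devices the run loop's
        // pool may not drain often enough to keep CIImage buffers bounded.
        autoreleasepool {
            metalView.image = filter.applyFilter(image: ciImage)
        }
    }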

CIAreaHistogram inputScale factor

蓝咒 submitted on 2019-12-21 04:58:14
Question: I'm building an application that uses the CIAreaHistogram Core Image filter. I use an inputCount value (number of buckets) of 10 for testing and an inputScale value of 1. I get the CIImage for the histogram itself, which I then run through a custom kernel (see end of post) to set the alpha values to 1 (since otherwise the alpha value from the histogram calculation is premultiplied), and then convert it to an NSBitmapImageRep. I then scan through the image rep's buffer and print the RGB values…
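For context, a minimal sketch of driving CIAreaHistogram with those settings (parameter keys as documented for the filter; everything else is an arbitrary choice):

    import CoreImage

    // Sketch: compute a 10-bucket histogram over an image's full extent.
    // The output is a count-by-1 image whose pixels encode, scaled by
    // inputScale, the fraction of pixels falling into each bucket.
    func histogramImage(of image: CIImage, buckets: Int = 10, scale: Float = 1) -> CIImage? {
        let filter = CIFilter(name: "CIAreaHistogram")!
        filter.setValue(image, forKey: kCIInputImageKey)
        filter.setValue(CIVector(cgRect: image.extent), forKey: kCIInputExtentKey)
        filter.setValue(buckets as NSNumber, forKey: "inputCount")
        filter.setValue(scale as NSNumber, forKey: kCIInputScaleKey)
        return filter.outputImage
    }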

Render dynamic text onto CVPixelBufferRef while recording video

喜夏-厌秋 submitted on 2019-12-21 01:12:51
Question: I'm recording video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method I want to draw text onto each individual sample buffer I receive from the video connection. The text changes with about every frame (it's a stopwatch label), and I want it recorded on top of the captured video data. Here's what I've been able to come up with so far:

    //1.
    CVPixelBufferRef pixelBuffer = …
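One way to approach this, sketched under the assumption that the buffer uses a BGRA pixel format: lock the pixel buffer, wrap its base address in a CGContext, and draw the label with Core Graphics before handing the buffer on (position, font, and color are arbitrary):

    import CoreVideo
    import UIKit

    // Sketch: draw a label into a BGRA CVPixelBuffer in place.
    func draw(text: String, on pixelBuffer: CVPixelBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        let height = CVPixelBufferGetHeight(pixelBuffer)
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: CVPixelBufferGetWidth(pixelBuffer),
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                | CGBitmapInfo.byteOrder32Little.rawValue) else { return }

        // Flip into UIKit's top-left-origin coordinates before string drawing.
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)

        UIGraphicsPushContext(context)
        defer { UIGraphicsPopContext() }
        (text as NSString).draw(at: CGPoint(x: 20, y: 20),
                                withAttributes: [.font: UIFont.boldSystemFont(ofSize: 32),
                                                 .foregroundColor: UIColor.white])
    }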