core-image

Place Image on larger canvas size using GPU (possibly CIFilters) without using Image Context

瘦欲@ submitted on 2019-12-12 13:16:11

Question: Let's say I have an image that's 100x100. I want to place the image onto a larger canvas that's 500x500. My current approach is to use UIGraphics to create a context, then draw the image into it: `UIGraphics.BeginImageContext(...); ImageView.Draw(...);` That works, but it's not as fast as I'd like it to be for what I'm doing. I've noticed that CIFilters are extremely fast. Is there a way I can place an image on a larger canvas using CIFilters, or another method that…
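Core Image can do this placement on the GPU without an image context: translate the source image onto the canvas, then composite it over a constant-color background. A minimal sketch, assuming a white background and centered placement (both assumptions, not from the question):

```swift
import CoreImage
import CoreGraphics

// Center `image` on a larger canvas and flatten it over a solid background.
func placedOnCanvas(_ image: CIImage, canvas: CGSize) -> CIImage {
    let dx = (canvas.width - image.extent.width) / 2
    let dy = (canvas.height - image.extent.height) / 2
    let moved = image.transformed(by: CGAffineTransform(translationX: dx, y: dy))
    // CIImage(color:) has infinite extent, so crop it to the canvas size.
    let background = CIImage(color: .white)
        .cropped(to: CGRect(origin: .zero, size: canvas))
    return moved.composited(over: background) // CISourceOverCompositing under the hood
}
```

Render the result once through a reused CIContext; creating a new context per draw would negate the speed advantage.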

How to apply CIPhotoEffectMono filter on image in iOS?

血红的双手。 submitted on 2019-12-12 11:45:38

Question: This is my code for the filter:

```swift
let filter = CIFilter(name: "CIPhotoEffectMono")
filter!.setValue(CIImage(image: imageView.image!), forKey: kCIInputImageKey)
filter!.setValue(0.3, forKey: kCIInputIntensityKey)
let context = CIContext(options: nil)
let cgimg = context.createCGImage(filter!.outputImage!, fromRect: filter!.outputImage!.extent)
let newImage = UIImage(CGImage: cgimg)
self.imageView.image = newImage
```

Here is the error message: Terminating app due to uncaught exception…
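The likely cause of the exception: CIPhotoEffectMono takes only an input image, so setting kCIInputIntensityKey (a key the filter does not declare) raises a key-value coding exception. A sketch of the same pipeline without that key, in current Swift:

```swift
import CoreImage
import UIKit

// Apply CIPhotoEffectMono; the photo-effect filters accept no intensity parameter.
func applyMono(to image: UIImage, context: CIContext = CIContext(options: nil)) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

If a variable-strength effect is needed, a different route is blending the filtered image back over the original, since intensity is not a parameter of this filter.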

Core Image Detector(CIDetector) is not detecting QRCodes

你说的曾经没有我的故事 submitted on 2019-12-12 05:26:46

Question: I am trying to give the user of my app the ability to take an image from their library and scan the QR code from it. Here is the relevant code, the function that returns from the image picker with the selected image:

```swift
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage, editingInfo: [String : AnyObject]?) {
    let thisImage: CIImage? = CIImage(image: image)
    let code = performQRCodeDetection(thisImage!)
    self.dismissViewControllerAnimated(true,
```
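For context, a sketch of what the `performQRCodeDetection` helper (the asker's own name; its body is an assumption here) typically looks like. A common reason for empty results is creating the detector without the CIDetectorAccuracy option, or `CIImage(image:)` returning nil for images without CGImage backing:

```swift
import CoreImage

// Detect QR codes in a still image and return the decoded message strings.
func performQRCodeDetection(_ image: CIImage) -> [String] {
    let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: image) ?? []
    return features.compactMap { ($0 as? CIQRCodeFeature)?.messageString }
}
```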

CGImageDestinationCreateWithData constants in iOS

谁都会走 submitted on 2019-12-12 05:17:54

Question: I have the following code, which turns a CGImage into NSData:

```swift
import Foundation
import CoreGraphics
import ImageIO
// ... snip ...
let data = NSMutableData()
if let dest = CGImageDestinationCreateWithData(data, kUTTypePNG, 1, nil),
   let image = self.backgroundImage {
    CGImageDestinationAddImage(dest, image, nil)
    if CGImageDestinationFinalize(dest) {
        return data as Data
    }
}
return nil
```

The code compiles fine on macOS, but kUTTypePNG is undefined on iOS. The actual value of the constant is…
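On iOS the UTType constants live in MobileCoreServices (on macOS they come via CoreServices), so importing that framework resolves the symbol; on iOS 14+/macOS 11+ the UniformTypeIdentifiers framework's `UTType.png` is the modern replacement. A sketch with the import added:

```swift
import Foundation
import CoreGraphics
import ImageIO
import MobileCoreServices // declares kUTTypePNG on iOS

// Encode a CGImage as PNG data, or nil on failure.
func pngData(from image: CGImage) -> Data? {
    let data = NSMutableData()
    guard let dest = CGImageDestinationCreateWithData(data as CFMutableData,
                                                      kUTTypePNG, 1, nil) else {
        return nil
    }
    CGImageDestinationAddImage(dest, image, nil)
    return CGImageDestinationFinalize(dest) ? data as Data : nil
}
```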

How to convert CAShapeLayer coordinates to CIRectangleFeature for manual crop

*爱你&永不变心* submitted on 2019-12-12 03:43:57

Question: I am building manual image-crop functionality. For this I draw a CAShapeLayer with four corner points: topLeft, topRight, bottomRight, and bottomLeft. The user can pan the points to select the crop area. I am stuck at converting these points to Core Image coordinates and then cropping with CIPerspectiveTransform.

Answer 1: Set the image on an image view and mask the image view with the shape layer:

```swift
self.imageView = img
imageView.layer.mask = shapeLayer // set here your shape layer
```

Now your imageView…
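The usual stumbling block in the conversion is that UIKit's origin is at the top-left while Core Image's is at the bottom-left, so each corner's y must be flipped (and scaled from view points to image pixels if the sizes differ). A minimal sketch of the flip, assuming view and image share dimensions:

```swift
import Foundation

// Flip a UIKit point (origin top-left) into Core Image coordinates
// (origin bottom-left) for an image of the given height.
func toCoreImageCoordinates(_ p: CGPoint, imageHeight: CGFloat) -> CGPoint {
    CGPoint(x: p.x, y: imageHeight - p.y)
}
```

The flipped corners can then be handed to CIPerspectiveCorrection (often more convenient than CIPerspectiveTransform for cropping) as `CIVector(cgPoint:)` values under the "inputTopLeft", "inputTopRight", "inputBottomRight", and "inputBottomLeft" keys.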

iOS CIFaceDetector very slow with Metal

孤人 submitted on 2019-12-11 15:37:21

Question: I've been trying to apply filters to a certain part of the face detected in an image. In order to apply filters to the whole image, I used the sample code from Apple: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/avcamfilter_applying_filters_to_a_capture_stream If I add just one line detecting faces via CIDetector to the method that sends out CVPixelBuffers to the FilterRenderer class and then to the MTKView to render the filtered buffer, the performance is…
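A frequent cause of this slowdown is constructing a CIDetector per frame on the render path; the detector is expensive to create and is meant to be built once and reused. A sketch of that pattern, trading accuracy for speed (the class and option choices here are assumptions, not part of Apple's sample):

```swift
import CoreImage

// Build the detector once; run it per frame with low-accuracy, tracking-enabled
// options, ideally on a queue separate from the Metal render loop.
final class FaceFinder {
    private let detector = CIDetector(ofType: CIDetectorTypeFace,
                                      context: nil,
                                      options: [CIDetectorAccuracy: CIDetectorAccuracyLow,
                                                CIDetectorTracking: true])

    func faces(in image: CIImage) -> [CIFaceFeature] {
        detector?.features(in: image).compactMap { $0 as? CIFaceFeature } ?? []
    }
}
```

For live capture, Vision's VNDetectFaceRectanglesRequest is the newer alternative to CIDetector and is generally better suited to per-frame work.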

How to apply Core Image filters one at a time to save memory consumption?

别等时光非礼了梦想. submitted on 2019-12-11 11:46:57

Question: When you apply a number of Core Image filters to an image, memory can quickly become a limiting factor (and often leads to a crash of the application). I was therefore wondering what a good approach is to apply one filter at a time and wait for each operation to complete. The example I am working on involves one photo to which the user can apply various effects/filters. The user is presented with a small thumbnail to get an idea of what each filter looks like. When all the filters are…
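One approach that keeps the peak footprint low: render the thumbnails strictly one at a time, reuse a single CIContext, and drain Core Image's temporaries with `autoreleasepool` after each render. A sketch under those assumptions:

```swift
import CoreImage
import UIKit

// One shared context; creating a CIContext per filter is itself memory-heavy.
let sharedContext = CIContext(options: nil)

// Render one filtered thumbnail per loop iteration, releasing temporaries
// before starting the next filter.
func thumbnails(for input: CIImage, filterNames: [String]) -> [UIImage] {
    var results: [UIImage] = []
    for name in filterNames {
        autoreleasepool {
            guard let filter = CIFilter(name: name) else { return }
            filter.setValue(input, forKey: kCIInputImageKey)
            guard let output = filter.outputImage,
                  let cg = sharedContext.createCGImage(output, from: input.extent)
            else { return }
            results.append(UIImage(cgImage: cg))
        }
    }
    return results
}
```

Passing an already-downscaled `input` (thumbnail-sized, not the full photo) reduces memory further, since each render then only allocates thumbnail-sized buffers.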

On iOS, can you add multiple CIFilters to a SpriteKit node?

南楼画角 submitted on 2019-12-11 07:33:29

Question: On iOS, can you add more than one CIFilter to an SKEffectNode? CIFilterGenerator seems like what I want, but it isn't available on iOS. I know you can use multiple filters on an image by passing the output of one as the input of the next, but that's not helpful if you want to affect non-image nodes. Does this mean I have to create an artificial hierarchy of SKEffectNodes and add a filter to each of them, with my actual content at the very bottom? Is there a better way?

Answer 1: Where it's…
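One workaround that avoids stacking SKEffectNodes: wrap the chain in a single custom CIFilter subclass whose outputImage runs the filters in sequence, and assign that one filter to the node. SpriteKit only needs the filter to respond to the inputImage key. A sketch (the class name is an assumption):

```swift
import CoreImage

// A CIFilter that applies several filters in sequence, so an SKEffectNode's
// single `filter` slot can carry a whole chain.
final class ChainFilter: CIFilter {
    @objc dynamic var inputImage: CIImage? // set via KVC by SKEffectNode
    let filters: [CIFilter]

    init(filters: [CIFilter]) {
        self.filters = filters
        super.init()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override var outputImage: CIImage? {
        filters.reduce(inputImage) { image, filter in
            guard let image = image else { return nil }
            filter.setValue(image, forKey: kCIInputImageKey)
            return filter.outputImage
        }
    }
}
```

Usage would be along the lines of `effectNode.filter = ChainFilter(filters: [blur, mono])`, with each element being an already-configured CIFilter.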

Crop CMSampleBuffer and process it without converting to CGImage

痞子三分冷 submitted on 2019-12-11 05:13:27

Question: I have been following Apple's live-stream camera editor code to get the hang of live video editing. So far so good, but I need a way to crop a sample buffer into 4 pieces and then process all four with different CIFilters. For instance, if the size of the image is 1000x1000, I want to crop the CMSampleBuffer into 4 images of size 250x250, then apply a unique filter to each, convert it back to a CMSampleBuffer, and display it on a Metal view. Here is the code up to which I could crop the…
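The cropping itself need not touch the buffer's bytes: wrap the pixel buffer in a CIImage and take four `cropped(to:)` views of it, one per quadrant (note that four equal quadrants of a 1000x1000 frame are 500x500 each). A sketch of the rect math, which is the testable part:

```swift
import Foundation

// Split an image extent into four equal quadrant rects; each rect can then be
// used as CIImage(cvPixelBuffer: buf).cropped(to: rect) before filtering.
func quadrants(of size: CGSize) -> [CGRect] {
    let w = size.width / 2
    let h = size.height / 2
    return [
        CGRect(x: 0, y: 0, width: w, height: h),
        CGRect(x: w, y: 0, width: w, height: h),
        CGRect(x: 0, y: h, width: w, height: h),
        CGRect(x: w, y: h, width: w, height: h),
    ]
}
```

After filtering, the four pieces can be composited back into one CIImage (translating each into place with `transformed(by:)` and `composited(over:)`) and rendered into a fresh pixel buffer with `CIContext.render(_:to:)` for display.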

Programmatically creating an SKTileDefinition

一个人想着一个人 submitted on 2019-12-11 04:24:50

Question: I've been beating my head against a wall for hours now. I am trying to modify a texture inside my app using a CIFilter and then use that new texture as part of a new SKTileDefinition to recolor tiles on my map. The function below finds tiles that players "own" and attempts to recolor them by changing the SKTileDefinition to the coloredDefinition:

```swift
func updateMapTileColoration(for players: Array<Player>) {
    for player in players {
        for row in 0..<mainBoardMap!.numberOfRows {
            for col in 0..
```
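SpriteKit has a direct hook for this: `SKTexture.applying(_:)` renders a texture through a Core Image filter and returns a new texture, which can back a fresh tile definition. A sketch of building such a `coloredDefinition` (the helper name echoes the question; the choice of CIColorMonochrome is an assumption):

```swift
import SpriteKit
import CoreImage

// Build a recolored SKTileDefinition by running the base texture through a
// Core Image tint filter.
func coloredDefinition(from base: SKTexture, color: CIColor) -> SKTileDefinition? {
    guard let filter = CIFilter(name: "CIColorMonochrome") else { return nil }
    filter.setValue(color, forKey: kCIInputColorKey)
    filter.setValue(1.0, forKey: kCIInputIntensityKey)
    let tinted = base.applying(filter) // renders via Core Image
    let definition = SKTileDefinition(texture: tinted)
    definition.size = base.size()
    return definition
}
```

Worth noting: this render is not free, so building one definition per player color up front and reusing it across tiles is cheaper than filtering inside the row/column loop.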