CIImage

How do I combine two or more CIImages?

五迷三道 submitted on 2019-12-07 14:32:15
Question: How do I combine two or more CIImages into one? I tried using ciContext.drawImage. How do I get a CGImage from it? What I have understood is that I am drawing into a context from which I should be able to get an image. Please let me know if I have misunderstood anything.

let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext)
ciContext.drawImage(firstCIImage, inRect: firstRect, fromRect: secondRect)
ciContext.drawImage(secondCIImage,
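A common way to combine CIImages without drawing into an EAGL context is source-over compositing followed by an explicit render. A minimal sketch, assuming the two images already exist (the function and parameter names here are illustrative, not from the question):

```swift
import CoreImage

// Composite one CIImage over another, then render the result to a CGImage.
func combine(_ topImage: CIImage, over bottomImage: CIImage) -> CGImage? {
    // composited(over:) applies the CISourceOverCompositing filter.
    let composited = topImage.composited(over: bottomImage)
    let context = CIContext()  // GPU-backed by default
    return context.createCGImage(composited, from: composited.extent)
}
```

createCGImage(_:from:) is the usual bridge back from Core Image's lazy recipe to a concrete CGImage.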

Given a CIImage, what is the fastest way to write image data to disk?

强颜欢笑 submitted on 2019-12-07 06:58:13
Question: I'm working with PhotoKit and have implemented filters users can apply to photos in their Photo Library. I currently obtain the image, apply a filter, and return the edited version as a CIImage; then I convert the CIImage into NSData using UIImageJPEGRepresentation so I can write it out to disk. While this works beautifully, when users attempt to edit really large (around 30 MB) photos it can take upwards of 30 seconds, with 98% of the time spent on
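A commonly suggested faster path (assuming iOS 10 / macOS 10.12 or later) is to skip the UIImage round-trip entirely and let CIContext encode the JPEG itself. A sketch; the URL and color space here are placeholders:

```swift
import CoreImage

// Render a CIImage straight to a JPEG file on disk,
// avoiding the UIImage / UIImageJPEGRepresentation detour.
func writeJPEG(_ image: CIImage, to url: URL) throws {
    let context = CIContext()
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    try context.writeJPEGRepresentation(of: image,
                                        to: url,
                                        colorSpace: colorSpace,
                                        options: [:])
}
```

Keeping the whole pipeline inside Core Image avoids materializing a second full-size bitmap just to re-encode it.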

iOS: Ambiguous Use of init(CGImage)

两盒软妹~` submitted on 2019-12-07 03:23:36
Question: I am trying to convert a CGImage into a CIImage; however, it is not working. This line of code:

let personciImage = CIImage(CGImage: imageView.image!.CGImage!)

throws the following error: Ambiguous use of 'init(CGImage)'. I'm really confused as to what this error means. I need to do this conversion because CIDetector.featuresInImage() from the built-in Core Image framework requires a CIImage.

Answer 1: I solved it on my own. It turns out I was capitalizing CGImage wrong. The code should really read:
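The answer above is cut off, but the capitalization issue it describes matches the Swift 3 rename, where the argument label became lowercase. A hedged sketch of what the fixed line presumably looks like (imageView comes from the question and is assumed to exist):

```swift
import CoreImage
import UIKit

// Swift 3+ spelling: both the CIImage argument label and the
// UIImage property are lowercase (cgImage, not CGImage).
let personciImage = CIImage(cgImage: imageView.image!.cgImage!)
```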

CIImage drawing EXC_BAD_ACCESS

北城以北 submitted on 2019-12-06 13:42:45
So, I have a CIImage that I'm attempting to draw in an NSView's -drawRect: method. This is the line of code I call to draw the image:

[outputCoreImage drawInRect: [self bounds] fromRect: originalBounds operation: NSCompositeSourceOver fraction: 1];

outputCoreImage, originalBounds, and [self bounds] are all non-nil, and are indeed their expected values. On Lion (OS X 10.7) this worked fine; however, on Mountain Lion (OS X 10.8) I receive an EXC_BAD_ACCESS on this line. If I walk up the stack, I find that the internal function call that breaks is CGLGetPixelFormat. frame
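A workaround often suggested for crashes in CIImage's own drawing method is to render through an explicit CIContext built from the view's current graphics context instead. A sketch in Swift (the question's code is Objective-C; outputCoreImage and originalBounds are assumed to be properties of the view):

```swift
import AppKit
import CoreImage

// Inside an NSView subclass: draw the CIImage via an explicit CIContext
// rather than CIImage's -drawInRect:fromRect:operation:fraction:.
override func draw(_ dirtyRect: NSRect) {
    guard let cgContext = NSGraphicsContext.current?.cgContext else { return }
    let ciContext = CIContext(cgContext: cgContext, options: nil)
    ciContext.draw(outputCoreImage, in: bounds, from: originalBounds)
}
```

Owning the CIContext explicitly sidesteps whatever GL pixel-format state the convenience method sets up internally.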

Objective-C: No matter what I do, CIDetector is always nil

旧巷老猫 submitted on 2019-12-06 05:16:20
I'm trying to get a simple proof of concept going with Apple's face detection API. I've looked at a couple of other examples, including Apple's SquareCam and https://github.com/jeroentrappers/FaceDetectionPOC. Based on these, it seems like I am following the correct pattern to get the APIs going, but I am stuck. No matter what I do, the CIDetector for my face detector is always nil! I would seriously appreciate any help, clues, hints, or suggestions!

-(void)initCamera {
    session = [[AVCaptureSession alloc] init];
    AVCaptureDevice *device;
    /* if ([self frontCameraAvailable]) {
        device = [self
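For reference, creating the detector itself takes only a type constant, an optional context, and an options dictionary; a nil result usually points at a wrong type string or options. A minimal sketch in Swift (the question is Objective-C, but the same calls exist there):

```swift
import CoreImage

// Create a face detector; CIDetectorTypeFace is the documented type constant.
// Passing nil for the context lets Core Image create a default one.
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
```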

Image rotating after CIFilter

拜拜、爱过 submitted on 2019-12-05 20:36:14
I'm applying a CIFilter to a portrait image. For some reason, it gets rotated 90° clockwise. How can I fix this? My code is below:

var imgOrientation = oImage.imageOrientation
var imgScale = oImage.scale
let originalImage = CIImage(image: oImage)
var filter = CIFilter(name: "CIPhotoEffect" + arr[sender.tag - 1000])
filter.setDefaults()
filter.setValue(originalImage, forKey: kCIInputImageKey)
var outputImage = filter.outputImage
var newImage = UIImage(CIImage: outputImage, scale: imgScale, orientation: imgOrientation)
cameraStill.image = newImage

I'm going to guess that the problem is this line: var
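A commonly suggested fix for this kind of orientation loss is to render the filter output to a concrete CGImage and rebuild the UIImage with the source image's scale and orientation, rather than wrapping the CIImage directly. A sketch assuming oImage and outputImage from the question's code:

```swift
import CoreImage
import UIKit

// Render the filtered CIImage to a CGImage, then reapply the
// original image's scale and orientation metadata.
let context = CIContext()
if let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
    let newImage = UIImage(cgImage: cgImage,
                           scale: oImage.scale,
                           orientation: oImage.imageOrientation)
}
```

UIImage(CIImage:) is known to interact poorly with orientation in some paths, which is why the explicit CGImage round-trip is the usual advice.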

SceneKit, flip direction of SCNMaterial

狂风中的少年 submitted on 2019-12-05 05:02:51
I'm extremely new to SceneKit, so I'm just looking for help here: I have an SCNSphere with a camera at the center of it. I create an SCNMaterial, double-sided, and assign it to the sphere. Since the camera is at the center, the image looks flipped vertically, which, when there is text inside, totally messes things up. So how can I flip the material, or the image (although later it will be frames from a video)? Any other suggestion is welcome. This solution, by the way, is failing on me: normalImage is applied as a material (but the image is flipped when looking from inside the sphere), but assigning flippedImage
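A SceneKit-level approach often suggested for this is to mirror the material's diffuse contents with a contentsTransform, rather than flipping the image (or each video frame) itself. A sketch; whether to negate the x or y scale depends on which axis appears mirrored from inside the sphere:

```swift
import SceneKit

// Mirror the texture across the material instead of editing the image.
let material = SCNMaterial()
material.isDoubleSided = true
material.diffuse.contentsTransform = SCNMatrix4MakeScale(-1, 1, 1)
material.diffuse.wrapS = .repeat  // lets the negative scale wrap instead of clamping
```

Because the transform lives on the material property, the same fix keeps working when the contents later become video frames.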

How to extract dominant color from CIAreaHistogram?

笑着哭i submitted on 2019-12-04 17:46:52
I'm looking to analyze the most dominant color in a UIImage on iOS (the color present in the most pixels), and I stumbled upon Core Image's filter-based API, particularly CIAreaHistogram. It seems like this filter could help me, but I'm struggling to understand the API. Firstly, it says the output of the filter is a one-dimensional image that is the length of your input bins and one pixel in height. How do I read this data? I basically want to figure out the color value with the highest frequency, so I'm expecting the data to contain some kind of frequency count for each color; it's not
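Reading that one-pixel-tall output image generally means rendering it into a raw float buffer with CIContext and walking the values yourself. A hedged sketch (inputImage and binCount are placeholders; note that CIAreaHistogram produces per-channel histograms, so this shows how to extract the bin values, not a finished dominant-color answer):

```swift
import CoreImage

// Run CIAreaHistogram and read its binCount x 1 output into a float buffer.
func histogramBins(for inputImage: CIImage, binCount: Int) -> [Float] {
    let filter = CIFilter(name: "CIAreaHistogram", parameters: [
        kCIInputImageKey: inputImage,
        kCIInputExtentKey: CIVector(cgRect: inputImage.extent),
        "inputCount": binCount,
        "inputScale": 1.0
    ])!
    let output = filter.outputImage!  // binCount pixels wide, 1 pixel tall
    var bins = [Float](repeating: 0, count: binCount * 4)  // RGBA per bin
    let context = CIContext()
    context.render(output,
                   toBitmap: &bins,
                   rowBytes: binCount * 4 * MemoryLayout<Float>.size,
                   bounds: output.extent,
                   format: .RGBAf,
                   colorSpace: nil)
    return bins
}
```

Each group of four floats is one bin's R, G, B, and A counts; scanning for the largest value per channel gives the most frequent intensity in that channel.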