core-image

iOS - Core Image - Add an effect to part of an image

梦想的初衷 submitted on 2019-12-05 11:32:01
I just had a look at the Core Image framework in iOS 5 and found that it's easy to apply an effect to a whole image. Is it possible to apply an effect to only part of an image (a rectangle), for example a grayscale effect on part of an image? I look forward to your help. Thanks, Huy

Felix: Watch session 510 from the WWDC 2012 videos. They present a technique for applying a mask to a CIImage. You need to learn how to chain the filters together. In particular, take a look at: CICrop, CILinearGradient, CIRadialGradient (which can be used to create the mask), and CISourceOverCompositing (to put mask images…
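The masking idea can be illustrated without Core Image at all: restrict an effect to a rectangle of the pixels. Below is a minimal sketch (not the WWDC technique itself) that applies Rec. 709 luminance grayscale only inside a rectangle of a raw RGBA buffer; the `PixelRect` type and buffer layout are assumptions for illustration.

```swift
import Foundation

// A pixel-space rectangle (hypothetical helper type for this sketch).
struct PixelRect {
    let x, y, width, height: Int
    func contains(_ px: Int, _ py: Int) -> Bool {
        px >= x && px < x + width && py >= y && py < y + height
    }
}

// Apply Rec. 709 luminance grayscale only to pixels inside `rect`.
// `pixels` is a row-major RGBA buffer, one UInt8 per channel.
func grayscale(in rect: PixelRect, pixels: [UInt8], width: Int, height: Int) -> [UInt8] {
    var out = pixels
    for py in 0..<height {
        for px in 0..<width where rect.contains(px, py) {
            let i = (py * width + px) * 4
            let r = Double(out[i]), g = Double(out[i + 1]), b = Double(out[i + 2])
            let y = UInt8((0.2126 * r + 0.7152 * g + 0.0722 * b).rounded())
            out[i] = y; out[i + 1] = y; out[i + 2] = y
        }
    }
    return out
}
```

In the filter-chain version the rectangle becomes a mask image (e.g. from CIConstantColorGenerator plus CICrop), and a blend filter picks the grayscale output inside the mask and the original outside it.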

Core Image filter CISourceOverCompositing not appearing as expected with alpha overlay

五迷三道 submitted on 2019-12-05 04:24:39
I'm using CISourceOverCompositing to overlay text on top of an image, and I'm getting unexpected results when the text image is not fully opaque: dark colors are not dark enough and light colors are too light in the output image. I recreated the issue in a simple Xcode project. It creates an image with orange, white, and black text drawn at 0.3 alpha, and that looks correct. I even threw that image into Sketch, placing it on top of the background image, and it looks great; the image at the bottom of the screen shows how that looks in Sketch. The problem is, after overlaying the text on the…
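One common explanation for this kind of mismatch (hedged: it may or may not be this asker's exact issue) is that Core Image composites in linear light, while design tools such as Sketch typically blend directly in sRGB, so semi-transparent dark text comes out lighter. A single-channel sketch of the two blend paths, using the standard IEC 61966-2-1 sRGB transfer functions:

```swift
import Foundation

// sRGB transfer functions (standard IEC 61966-2-1 formulas).
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}
func linearToSRGB(_ c: Double) -> Double {
    c <= 0.0031308 ? c * 12.92 : 1.055 * pow(c, 1.0 / 2.4) - 0.055
}

// Source-over of one channel, blending directly in sRGB values
// (what design tools typically do)…
func overInSRGB(src: Double, alpha: Double, dst: Double) -> Double {
    src * alpha + dst * (1 - alpha)
}

// …versus converting to linear light first (what Core Image does).
func overInLinear(src: Double, alpha: Double, dst: Double) -> Double {
    linearToSRGB(srgbToLinear(src) * alpha + srgbToLinear(dst) * (1 - alpha))
}
```

For black text at 0.3 alpha over white, the sRGB path gives 0.7 while the linear-light path gives roughly 0.85, i.e. visibly lighter, matching the "dark colors are not dark enough" symptom.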

Core Image - rendering a transparent image on CMSampleBufferRef result in black box around it

。_饼干妹妹 submitted on 2019-12-05 01:23:09
Question: I'm trying to add a watermark/logo to a video that I'm recording using AVFoundation's AVCaptureVideoDataOutput. My class is set as the sampleBufferDelegate and receives the CMSampleBufferRefs. I already apply some effects to each CMSampleBufferRef's CVPixelBuffer and pass it back to the AVAssetWriter. The logo in the top-left corner is delivered as a transparent PNG. The problem I'm having is that the transparent parts of the UIImage are black once written to the video. Anyone have an idea…
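A frequent cause of black boxes in this scenario (an educated guess, not necessarily this asker's bug) is an alpha-mode mismatch: fully or partially transparent pixels in a premultiplied buffer carry color components already scaled toward zero, so if the drawing step ignores alpha they render as black. A sketch of the arithmetic, assuming 8-bit channels:

```swift
import Foundation

// Convert one straight-alpha RGBA pixel to premultiplied alpha, the
// layout pixel-buffer pipelines commonly expect. Values are 0...255.
// Note how any fully transparent pixel premultiplies to (0, 0, 0, 0),
// i.e. black — which is what shows up when alpha is later ignored.
func premultiply(r: Int, g: Int, b: Int, a: Int) -> (r: Int, g: Int, b: Int, a: Int) {
    (r * a / 255, g * a / 255, b * a / 255, a)
}

// The inverse, for reading premultiplied data back to straight alpha.
func unpremultiply(r: Int, g: Int, b: Int, a: Int) -> (r: Int, g: Int, b: Int, a: Int) {
    a == 0 ? (0, 0, 0, 0) : (r * 255 / a, g * 255 / a, b * 255 / a, a)
}
```

The practical fix is usually to make sure the logo is composited with an alpha-aware blend (rather than copied) and that the bitmap info of every intermediate context declares the alpha mode the data actually uses.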

Crash upon CGImageDestinationFinalize

泄露秘密 submitted on 2019-12-05 01:17:36
My app allows users to edit photos using the Photos framework. I am seeing some crash reports, and it appears the crash occurs when generating the output image, but I am not sure where the problem lies. The crash occurs on multiple hardware devices and multiple versions of iOS 9, including the latest, 9.1. The last call my app makes is CGImageDestinationFinalize, in order to create the edited image's NSData. The crash reports show that calls continue in the Core Image space before the crash occurs, in what appears to be GLTextureManager. Could this be an out-of-memory issue? Do you see the issue…

How to get meaningful CIAreaHistogram output?

瘦欲@ submitted on 2019-12-05 00:28:55
Question: I want to calculate the histogram of a CGImage. I am using the built-in CIAreaHistogram Core Image filter. Justin Mrkva has done something along similar lines. He says: "I get the CIImage for the histogram itself, which I then run through a custom kernel (see end of post) to set alpha values to 1 (since otherwise the alpha value from the histogram calculations is premultiplied) and then convert it to an NSBitmapImageRep." My question is: is it possible to get the histogram data without having…
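Once the filter's count×1 output image has been rendered into a float buffer, reading it is plain arithmetic. A sketch of that readout step, assuming an RGBA layout with one Float per channel and dividing by alpha to undo the premultiplication the quoted post mentions:

```swift
import Foundation

// Interpret a rendered histogram row as per-bin (r, g, b) values.
// Layout assumption: `buffer` holds binCount RGBA quadruples of Floats,
// with each channel premultiplied by the alpha stored alongside it.
func histogramBins(from buffer: [Float], binCount: Int) -> [(r: Float, g: Float, b: Float)] {
    precondition(buffer.count == binCount * 4, "expected binCount RGBA quadruples")
    return (0..<binCount).map { i in
        let a = buffer[i * 4 + 3]
        guard a > 0 else { return (0, 0, 0) }  // empty bin: nothing to recover
        return (buffer[i * 4] / a, buffer[i * 4 + 1] / a, buffer[i * 4 + 2] / a)
    }
}
```

This avoids the custom-kernel step for the common case: the kernel in the quoted approach exists only to neutralize the same premultiplication that the division above undoes on the CPU.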

Color Balance with Core Image

生来就可爱ヽ(ⅴ<●) submitted on 2019-12-04 20:00:35
I'm trying to recreate a "filter" from Photoshop with Core Image. I've got the easier adjustments like exposure, vibrance, and tone curve down, but I'm not sure how to replicate a color balance with shadows, midtones, and highlights. I've tried CIColorMatrix, but it doesn't have the same effect of adjusting the colors of the respective shadows/midtones/highlights, and CIHighlightShadowAdjust also does not create the same color effect as the Photoshop color balance. What can I use to replicate this Photoshop color balance, as shown in the screenshot below? I'll throw in some code here that I tried with CIColorMatrix: let…
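Adobe's exact color-balance formula is unpublished, so any reimplementation is an approximation. The usual structure, though, is to weight a per-channel shift by smooth shadow/midtone/highlight curves of the pixel's luminance; in Core Image this would become a custom kernel, but the math itself can be sketched (the tent-shaped weight curves below are my own hypothetical choice, not Photoshop's):

```swift
import Foundation

// Hypothetical tent-shaped weight curves over luminance l in 0...1:
// shadows dominate near 0, highlights near 1, midtones in between.
func shadowWeight(_ l: Double) -> Double { max(0, 1 - 2 * l) }
func highlightWeight(_ l: Double) -> Double { max(0, 2 * l - 1) }
func midtoneWeight(_ l: Double) -> Double { 1 - shadowWeight(l) - highlightWeight(l) }

// Shift one channel value by separate shadow/midtone/highlight amounts,
// each scaled by how strongly that tonal range applies at luminance l.
func balance(channel v: Double, luminance l: Double,
             shadows: Double, midtones: Double, highlights: Double) -> Double {
    let shifted = v + shadows * shadowWeight(l)
                    + midtones * midtoneWeight(l)
                    + highlights * highlightWeight(l)
    return min(1, max(0, shifted))  // clamp to the displayable range
}
```

Running this once per channel, with (for example) a positive red shift in shadows and a negative one in highlights, reproduces the qualitative behavior of the Photoshop control, even if the exact curve shapes differ.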

How to detect hardware acceleration for Core Image?

南楼画角 submitted on 2019-12-04 19:15:29
In my app I'm using a "CIMotionBlur" CIFilter during a CALayer animation. The problem is that the filter does not work properly when hardware acceleration is not available: in OS X Safe Mode the layer becomes invisible during the animation, and under VMware Fusion the animation is unbearably slow, which makes testing the app harder. The animation works fine without the filter. I'd like to apply the filter only when hardware acceleration is available. What's the highest-level API that would let me know when to disable the filter? I'm going to look for clues in IOKit. I found the answer in Technical Q&A QA1218…

CIFaceFeature Bounds

泄露秘密 submitted on 2019-12-04 18:17:47
While doing face detection work with CIFaceFeature, I ran into an issue with the bounds: when trying to put a box around the recognized face, the frame would always be misplaced. Other questions on Stack Overflow point out that the Core Image and UIKit coordinate systems are inverted. (Diagrams of the Core Image and UIKit coordinate systems are from https://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html .) Obviously, this coordinate-system difference is the reason for the frame misplacement. Now, the x-axis, width, and height remain the same. The only…
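The conversion the excerpt is building toward is a one-line y-flip: Core Image's origin is at the bottom-left, UIKit's at the top-left, and x, width, and height carry over unchanged. A sketch (the function name is mine):

```swift
import Foundation

// Flip a Core Image rect (bottom-left origin) into UIKit coordinates
// (top-left origin). Only y changes; it is mirrored against the height
// of the image the face bounds were computed in.
func uikitRect(fromCIRect r: CGRect, imageHeight: CGFloat) -> CGRect {
    CGRect(x: r.origin.x,
           y: imageHeight - r.origin.y - r.size.height,
           width: r.size.width,
           height: r.size.height)
}
```

When drawing into a UIView, the rect usually also needs scaling from image pixels to view points, but that is a separate step from the flip shown here.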

How to extract dominant color from CIAreaHistogram?

笑着哭i submitted on 2019-12-04 17:46:52
I am looking to analyze the most dominant color in a UIImage on iOS (the color present in the most pixels), and I stumbled upon Core Image's filter-based API, particularly CIAreaHistogram. It seems like this filter could help me, but I am struggling to understand the API. Firstly, it says the output of the filter is a one-dimensional image which is the length of your input bins and one pixel in height. How do I read this data? I basically want to figure out the color value with the highest frequency, so I am expecting the data to contain some kind of frequency count for each color; it's not…
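Once the per-bin frequencies have been read out of the histogram image, "most dominant" reduces to an argmax over the bins, plus a mapping from bin index back to a channel value. A sketch of that last step, assuming the bins evenly divide the 0–255 range (a full dominant-*color* answer would need a 3D color histogram rather than per-channel ones):

```swift
import Foundation

// Index of the bin with the largest frequency count.
func dominantBin(counts: [Float], binCount: Int) -> Int {
    precondition(counts.count == binCount)
    return counts.indices.max { counts[$0] < counts[$1] }!
}

// Map a bin index back to an approximate 8-bit channel value,
// assuming the bins evenly span 0...255.
func binValue(_ index: Int, binCount: Int) -> Int {
    precondition(binCount > 1)
    return Int((Double(index) / Double(binCount - 1) * 255).rounded())
}
```

With the typical 256-bin setup the mapping is the identity, so the dominant bin index is directly the dominant channel value.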

Any suggestions on how to handle this crash in CGImageDestinationFinalize?

瘦欲@ submitted on 2019-12-04 14:51:44
Question: My application reads and resizes images loaded from the internet; unfortunately, I can't control the creation of these images. Recently I had a crash that I am not sure how best to handle. In this case the image was a corrupt GIF file. It wasn't badly corrupted, but it reported a resolution (height × width) that wasn't accurate: the image was supposed to be a 400×600 image but reported something like 1111×999. The code snippet that crashed is: - (void)…
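Since the file's header can't be trusted here, one defensive option (a sketch, not a fix for the underlying decoder bug) is to sanity-check the reported dimensions against a byte budget before allocating any bitmap context for the resize, using overflow-checked multiplication so a corrupt header can't wrap the size computation around:

```swift
import Foundation

// Decide whether reported image dimensions are safe to decode, instead
// of trusting the file's header. `maxBytes` caps the decode buffer.
func canSafelyDecode(reportedWidth: Int, reportedHeight: Int,
                     bytesPerPixel: Int = 4, maxBytes: Int = 64 * 1024 * 1024) -> Bool {
    guard reportedWidth > 0, reportedHeight > 0 else { return false }
    // Overflow-checked multiplies: a huge bogus dimension must fail
    // cleanly rather than wrap around to a small positive number.
    let (rowBytes, rowOverflow) = reportedWidth.multipliedReportingOverflow(by: bytesPerPixel)
    guard !rowOverflow else { return false }
    let (total, totalOverflow) = rowBytes.multipliedReportingOverflow(by: reportedHeight)
    return !totalOverflow && total <= maxBytes
}
```

This rejects absurd headers up front; it cannot catch a header that lies plausibly (as in the 1111×999 case), where the only robust defense is to decode into a buffer sized from the header and tolerate a partially filled result.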