Question
I am trying to write a helper function that applies a color mask to a given image. My function has to set all opaque pixels of the image to the same color.
Here is what I have so far:
import UIKit
import CoreImage

extension UIImage {
    func applyColorMask(color: UIColor, context: CIContext) -> UIImage {
        guard let cgImageInput = self.cgImage else {
            print("applyColorMask: \(self) has no cgImage attribute.")
            return self
        }
        // Throw away the existing colors and fill the non-transparent pixels with the input color:
        // s.r = dot(s, redVector), s.g = dot(s, greenVector), s.b = dot(s, blueVector), s.a = dot(s, alphaVector)
        // s = s + bias
        let colorFilter = CIFilter(name: "CIColorMatrix")!
        let ciColorInput = CIColor(cgColor: color.cgColor)
        colorFilter.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputRVector")
        colorFilter.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputGVector")
        colorFilter.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputBVector")
        colorFilter.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")
        colorFilter.setValue(CIVector(x: ciColorInput.red, y: ciColorInput.green, z: ciColorInput.blue, w: 0), forKey: "inputBiasVector")
        colorFilter.setValue(CIImage(cgImage: cgImageInput), forKey: kCIInputImageKey)
        if let cgImageOutput = context.createCGImage(colorFilter.outputImage!, from: colorFilter.outputImage!.extent) {
            return UIImage(cgImage: cgImageOutput)
        } else {
            print("applyColorMask: failed to apply filter to \(self)")
            return self
        }
    }
}
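A call site would look something like this (hypothetical names; a reusable CIContext is assumed):

// Hypothetical usage; `originalImage` stands in for any UIImage with an alpha channel.
let context = CIContext()
let tinted = originalImage.applyColorMask(color: UIColor(red: 0.2, green: 0.5, blue: 1.0, alpha: 1.0), context: context)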
The code works fine for black and white, but it does not produce what I expected when applying other colors. See the original image and the screenshots below: the same color is used for the border and for the image, yet they look different. My function is doing something wrong. Did I miss something in the filter matrix?
The original image (there's a white dot at the center):
From top to bottom: the image filtered with UIColor(1.0, 1.0, 1.0, 1.0), inserted into a UIImageView whose borders have the same color. Then the same with UIColor(0.6, 0.5, 0.4, 1.0), and finally with UIColor(0.2, 0.5, 1.0, 1.0).
EDIT
Running Filterpedia gives me the same result, so my understanding of the CIColorMatrix filter may be wrong. The documentation says:
This filter performs a matrix multiplication, as follows, to transform the color vector:
- s.r = dot(s, redVector)
- s.g = dot(s, greenVector)
- s.b = dot(s, blueVector)
- s.a = dot(s, alphaVector)
- s = s + bias
Then, let's say I throw away all RGB data with (0, 0, 0, 0) vectors and pass a mid-red (0.5, 0, 0, 0) as the bias vector; I would expect all fully opaque pixels of my image to become (127, 0, 0). The screenshots below show that the result is lighter (red = 186):
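Spelling the documented formula out with these inputs gives the value I expect (a quick sanity check in plain Swift; the pixel value is arbitrary):

// With zeroed R/G/B vectors, alphaVector = (0, 0, 0, 1) and bias = (0.5, 0, 0, 0),
// any opaque pixel should come out as (0.5, 0, 0, 1), i.e. (127, 0, 0) in 8-bit terms.
func dot(_ a: [Double], _ b: [Double]) -> Double {
    return zip(a, b).map(*).reduce(0, +)
}
let s: [Double] = [0.8, 0.3, 0.1, 1.0]              // an arbitrary opaque input pixel
let zero: [Double] = [0, 0, 0, 0]
let alphaVector: [Double] = [0, 0, 0, 1]
let bias: [Double] = [0.5, 0, 0, 0]
let transformed = [dot(s, zero), dot(s, zero), dot(s, zero), dot(s, alphaVector)]
let result = zip(transformed, bias).map(+)          // [0.5, 0.0, 0.0, 1.0]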
Here is some pseudocode of what I want to do:
// image "im" is a vector of pixels
// pixel "p" is a struct of rgba values
// color "col" is the input color, a struct of rgba values
for (p in im) {
    p.r = col.r
    p.g = col.g
    p.b = col.b
    // nothing to do with the alpha channel
}
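For comparison, the same "keep alpha, replace color" effect can also be had without Core Image, by clipping a Core Graphics context to the image's alpha mask and filling with the color (a sketch; `filled(with:)` is a hypothetical name):

import UIKit

extension UIImage {
    // Fills every non-transparent pixel with `color`, using the image's
    // own alpha channel as a clipping mask (Core Graphics, not Core Image).
    func filled(with color: UIColor) -> UIImage? {
        guard let cgImage = self.cgImage else { return nil }
        let rect = CGRect(origin: .zero, size: size)
        return UIGraphicsImageRenderer(size: size).image { ctx in
            let cg = ctx.cgContext
            // Flip the context so the CGImage mask is not applied upside down.
            cg.translateBy(x: 0, y: size.height)
            cg.scaleBy(x: 1, y: -1)
            cg.clip(to: rect, mask: cgImage)
            color.setFill()
            cg.fill(rect)
        }
    }
}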
Answer 1:
I finally wrote a CIColorKernel, as @dfd suggested, and it works fine:
import CoreImage

class ColorFilter: CIFilter {
    var inputImage: CIImage?
    var inputColor: CIColor?

    let kernel: CIColorKernel = {
        let kernelString = """
        kernel vec4 colorize(__sample pixel, vec4 color) {
            pixel.rgb = color.rgb;
            return pixel;
        }
        """
        return CIColorKernel(source: kernelString)!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            print("\(self) cannot produce output because no input image was provided.")
            return nil
        }
        guard let inputColor = inputColor else {
            print("\(self) cannot produce output because no input color was provided.")
            return nil
        }
        let inputs = [inputImage, inputColor] as [Any]
        return kernel.apply(extent: inputImage.extent, arguments: inputs)
    }
}
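Using the filter then looks like this (a sketch; `sourceImage` and the throwaway CIContext are assumptions):

// Hypothetical call site for the ColorFilter above.
let filter = ColorFilter()
filter.inputImage = CIImage(image: sourceImage)   // sourceImage is a UIImage
filter.inputColor = CIColor(color: UIColor(red: 0.2, green: 0.5, blue: 1.0, alpha: 1.0))
if let output = filter.outputImage,
    let cgImage = CIContext().createCGImage(output, from: output.extent) {
    let result = UIImage(cgImage: cgImage)
    // `result` now has every opaque pixel set to the input color.
}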
To summarize: the CIColorMatrix filter I used first does not behave linearly when fed a bias vector. Giving a 0.5 (float) red bias did not output an image with red 127 in the [0-255] range. This is most likely because Core Image applies the matrix in a linear working color space and gamma-encodes the result to sRGB on output; linear 0.5 encodes to roughly 188, which matches the 186 I measured.
Writing a custom filter was my solution.
Answer 2:
Glad to have helped. If I may, one thing: you can send a color into a kernel - it's a vec4, with the fourth value being the alpha channel. Just remember that Core Image uses 0-1 values, not 0-255.
Here's the kernel code:
kernel vec4 colorize(__sample pixel, vec4 color) {
    pixel.rgb = color.rgb;
    return pixel;
}
It's pretty much the same as yours. But now all you need to do is create a CIColor instead of an image. If you already have a UIColor called inputColor, just do this:
let ciColor = CIColor(color: inputColor)
var inputs = [inputImage, ciColor] as [Any]
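From there, applying the kernel is one call (assuming `kernel` is the CIColorKernel compiled from the source above):

// Produces the recolored CIImage over the full input extent.
let outputImage = kernel.apply(extent: inputImage.extent, arguments: inputs)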
A couple of FYIs:
- There's also a vec3 that can be used, but since you already have a UIColor, vec4 looks to be the easiest way.
- __sample is the value of the pixel being processed, so its "base type" is really a vec4.
Source: https://stackoverflow.com/questions/50216613/colorize-a-uiimage-in-swift