Blend two UIImages based on the alpha/transparency of the top image

Submitted by 一世执手 on 2019-11-26 18:21:42

This is what I've done in my app, similar to Tyler's - but without the UIImageView:

UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"];
UIImage *image = [UIImage imageNamed:@"top.png"];

// width and height are the desired dimensions of the composite
// (they are not defined in this snippet; bottomImage.size is a common choice)
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext( newSize );

// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

If an image already has the opacity you want, you do not need to set one when drawing it (as with bottomImage); otherwise you can supply an explicit alpha (as with image).
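
If you do want to reduce the bottom image's opacity as well, the same drawInRect:blendMode:alpha: call works for both layers. Here is a minimal sketch of that variant; it assumes the output should match the bottom image's size, and the alpha values are just examples:

UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"];
UIImage *image = [UIImage imageNamed:@"top.png"];

CGSize newSize = bottomImage.size;   // assumption: output matches the bottom image
UIGraphicsBeginImageContext(newSize);

// Explicit, caller-chosen alphas for both layers (0.5 and 0.8 are just examples)
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
              blendMode:kCGBlendModeNormal
                  alpha:0.5];
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)
        blendMode:kCGBlendModeNormal
            alpha:0.8];

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();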

Tyler
UIImage* bottomImage = [UIImage imageNamed:@"bottom.png"];  
UIImage* topImage    = [UIImage imageNamed:@"top.png"];
UIImageView* imageView = [[UIImageView alloc] initWithImage:bottomImage];
UIImageView* subView   = [[UIImageView alloc] initWithImage:topImage];
subView.alpha = 0.5;  // Customize the opacity of the top image.
[imageView addSubview:subView];
// Render the layered image views into a bitmap context.
UIGraphicsBeginImageContext(imageView.frame.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* blendedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[subView release];    // Manual memory management; omit these two lines under ARC
[imageView release];

[self doWhateverIWantWith: blendedImage];
Jason TEPOORTEN

My answer is based on Eric's, but it allows @2x images to retain their resolution after the image merge. Please note the URL REFs in my comments; they acknowledge the sources that helped me develop this function, which I use in my iOS apps.

- (UIImage *)mergeTwoImages:(UIImage *)topImage :(UIImage *)bottomImage
{
    // URL REF: http://iphoneincubator.com/blog/windows-views/image-processing-tricks
    // URL REF: https://stackoverflow.com/questions/1309757/blend-two-uiimages?answertab=active#tab-top
    // URL REF: http://www.waterworld.com.hk/en/blog/uigraphicsbeginimagecontext-and-retina-display

    int width = bottomImage.size.width;
    int height = bottomImage.size.height;

    CGSize newSize = CGSizeMake(width, height);
    // Look up the screen scale only once and cache it.
    static CGFloat scale = -1.0;

    if (scale<0.0)
    {
        UIScreen *screen = [UIScreen mainScreen];

        if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0)
        {
            scale = [screen scale];
        }
        else
        {
            scale = 0.0;    // Use the standard API
        }
    }

    if (scale>0.0)
    {
        UIGraphicsBeginImageContextWithOptions(newSize, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(newSize);
    }

    [bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
    [topImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
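
For reference, here is a usage sketch of this method; the asset names and the call site (self) are assumptions:

UIImage *top = [UIImage imageNamed:@"top.png"];
UIImage *bottom = [UIImage imageNamed:@"bottom.png"];

// The selector is mergeTwoImages:: (the second argument has no keyword)
UIImage *merged = [self mergeTwoImages:top :bottom];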

Blending with alpha

UIGraphicsBeginImageContext(area.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextRetain(context);   // optional; balanced by CGContextRelease below

// Mirror the context: Core Graphics uses a bottom-left origin, so the flip
// makes CGContextDrawImage draw the images right side up.
CGContextTranslateCTM(context, 0.0, area.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

// area, tempimg, and alpha come from the surrounding code; each image is
// drawn into its own transparency layer at the chosen alpha.
for (...) {
    CGContextBeginTransparencyLayer(context, nil);
    CGContextSetAlpha(context, alpha);
    CGContextDrawImage(context, area, tempimg.CGImage);
    CGContextEndTransparencyLayer(context);
}

// get created image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
CGContextRelease(context);
UIGraphicsEndImageContext();
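
The snippet above is a fragment: area, tempimg, and alpha come from the surrounding code. Below is a minimal self-contained sketch of the same idea, assuming an array of UIImages and a uniform 50% alpha; the names and size are illustrative only:

NSArray *images = @[[UIImage imageNamed:@"bottom.png"],
                    [UIImage imageNamed:@"top.png"]];
CGRect area = CGRectMake(0, 0, 320, 320);

UIGraphicsBeginImageContext(area.size);
CGContextRef context = UIGraphicsGetCurrentContext();

// Flip the context so CGContextDrawImage draws the images right side up.
CGContextTranslateCTM(context, 0.0, area.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

for (UIImage *tempimg in images) {
    CGContextBeginTransparencyLayer(context, nil);
    CGContextSetAlpha(context, 0.5);
    CGContextDrawImage(context, area, tempimg.CGImage);
    CGContextEndTransparencyLayer(context);
}

UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();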

Swift 3

This function takes two images and a CGSize and returns an optional UIImage. It works best when both images are the same size. If the top image has alpha, the bottom image will show through it.

// Composite two images, drawing the top image over the bottom one
func compositeTwoImages(top: UIImage, bottom: UIImage, newSize: CGSize) -> UIImage? {
    // begin context with new size
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    // draw images to context
    bottom.draw(in: CGRect(origin: CGPoint.zero, size: newSize))
    top.draw(in: CGRect(origin: CGPoint.zero, size: newSize))
    // return the new image
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // returns an optional
    return newImage
}

Usage

let outputSize = CGSize(width: 100, height: 100)
if let topImage = UIImage(named: "myTopImage") {
    if let bottomImage = UIImage(named: "myBottomImage") {
        // composite both images
        if let finalImage = compositeTwoImages(top: topImage, bottom: bottomImage, newSize: outputSize) {
            // do something with finalImage
        }
    }
}

Can you provide more detail about what you mean by "it does not seem to work"? Does it draw only one image, or the other image? Draw black? Noise? Crash? Why have you chosen kCGBlendModeSourceIn, and what effect are you trying to achieve (there are dozens of ways to blend images)? Does either of your images already have alpha?

I assume what you're trying to do is mix two images such that each has 50% opacity? Use CGContextSetAlpha() for that rather than CGContextSetBlendMode().
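
For example, here is a minimal sketch of that approach; imgA and imgB are hypothetical UIImages of the same size:

CGSize size = imgA.size;
CGRect rect = CGRectMake(0, 0, size.width, size.height);

UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Flip the context so CGContextDrawImage draws right side up.
CGContextTranslateCTM(ctx, 0.0, size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);

CGContextDrawImage(ctx, rect, imgA.CGImage);   // bottom image at full opacity
CGContextSetAlpha(ctx, 0.5);                   // mix the second image in at 50%
CGContextDrawImage(ctx, rect, imgB.CGImage);

UIImage *mixed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Using UIImage's drawInRect: as suggested below avoids the manual flip.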

You can use UIImage's drawInRect: or drawAtPoint: instead of CGContextDrawImage (they draw to the current context). Does using them give you any difference in output?

It may also be helpful to make sure the UIImage* values you are getting back from imageNamed: are valid.
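
A quick sanity check along those lines; the asset name is just an example:

UIImage *top = [UIImage imageNamed:@"top.png"];
NSAssert(top != nil, @"top.png was not found in the bundle");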
