UIImagePNGRepresentation and masked images


Question


  1. I created a masked image using a function from an iPhone blog:

    UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];

  2. It looks good in a UIImageView:

    UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
    imgView.center = CGPointMake(160.0f, 140.0f);
    [self.view addSubview:imgView];
    
  3. I use UIImagePNGRepresentation to save it to disk:

    [UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];

UIImagePNGRepresentation returns NSData for an image that looks different from the one shown in the app.

The output has the mask inverted: the area that was cut out in the app is visible in the saved file, and the area that was visible in the app is removed.

My mask is designed to remove everything but the face area in the picture. The UIImage looks right in the app, but after I save it to disk the file looks like the opposite: the face is removed and everything else is there.

Please let me know if you can help!


Answer 1:


I had the exact same issue: the file I saved came out one way, but the image returned in memory was the exact opposite.

The culprit, and the solution, was UIImagePNGRepresentation(). It fixes up the in-app image before it is saved to disk, so I simply inserted that function as the last step when creating the masked image and returned the result.
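
The gist of the fix, as a minimal sketch (maskImage:withMask: stands in for the masking helper from the question; photo and mask are illustrative names):

UIImage *masked = [self maskImage:photo withMask:mask]; // your existing masking code
NSData *pngData = UIImagePNGRepresentation(masked);     // flattens the CGImage mask into real pixels
masked = [UIImage imageWithData:pngData];               // now the in-memory image matches what gets saved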

This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; I'm not sure the code below works exactly as-is, but if not, it's close... maybe just some typos.

Enjoy. :)

// MyImageHelperObj.h

@interface MyImageHelperObj : NSObject

+ (UIImage *) createGrayScaleImage:(UIImage*)originalImage;
+ (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;

@end





// MyImageHelperObj.m

#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"


@implementation MyImageHelperObj


+ (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw the source image at the new size
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // draw the mask image at the new size in its own context
    // (drawing it into the source's context would blend the two images)
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create grayscale version of mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);

    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);
    // round-trip through PNG so the in-memory image matches what gets saved to disk
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}

+ (UIImage *) createGrayScaleImage:(UIImage*)originalImage;
{
    //create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    //create 8-bit bitmap context without alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    //Draw image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    //Get image from bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    //note: the original UIImage's orientation metadata is ignored here; the raw CGImage is drawn and wrapped as-is.
    UIImage* image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end
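
Hypothetical usage, reusing the images and save path from the question (the target size here is made up for illustration):

UIImage *masked = [MyImageHelperObj createMaskedImageWithSize:CGSizeMake(320.0f, 280.0f)
                                                  sourceImage:[UIImage imageNamed:@"pic.jpg"]
                                                    maskImage:[UIImage imageNamed:@"sd-face-mask.png"]];
[UIImagePNGRepresentation(masked) writeToFile:[self findUniqueSavePath] atomically:YES];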



Answer 2:


In Quartz you can mask either with an image mask (black lets through, white blocks) or with a regular image (white lets through, black blocks), which is the opposite. It seems that, for some reason, saving treats the image mask as a regular image to mask with. One thought is to render the masked result into a bitmap context and then create the image to be saved from that, as in the sketch below.
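
A minimal sketch of that render-then-save idea, assuming maskedImage is the result of the masking code and savePath is wherever you want the file (both names are illustrative):

UIGraphicsBeginImageContextWithOptions(maskedImage.size, NO, maskedImage.scale);
[maskedImage drawInRect:CGRectMake(0.0f, 0.0f, maskedImage.size.width, maskedImage.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext(); // the mask is now baked into ordinary pixels
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(flattened) writeToFile:savePath atomically:YES];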



Source: https://stackoverflow.com/questions/4189166/uiimagepngrepresentation-and-masked-images
