High Quality Scaling of UIImage

谎友^ 2020-11-27 19:16

I need to scale the resolution of an image coming from a view layer in an iPhone application. The obvious way is to specify a scale factor in UIGraphicsBeginImageContextWithOptions.

4 Answers
  • 2020-11-27 19:46

    I suppose you could use something like ImageMagick. Apparently it's been successfully ported to iPhone: http://www.imagemagick.org/discourse-server/viewtopic.php?t=14089

    I've always been satisfied with the quality of images scaled by this library, so I think you'll be satisfied with the result.

  • 2020-11-27 19:53

    Swift extension:

    extension UIImage {
    
        // returns a scaled version of the image, or nil if the
        // graphics context could not produce one
        func imageScaledToSize(_ size: CGSize, isOpaque: Bool) -> UIImage? {
    
            // begin a context of the desired size; a scale of 0.0 uses
            // the device's main screen scale
            UIGraphicsBeginImageContextWithOptions(size, isOpaque, 0.0)
    
            // close the context when we leave this scope
            defer { UIGraphicsEndImageContext() }
    
            // draw the image in the rect with zero origin and size of the context
            draw(in: CGRect(origin: .zero, size: size))
    
            // get the scaled image before the deferred call closes the context
            return UIGraphicsGetImageFromCurrentImageContext()
        }
    }
    

    Example:

    aUIImageView.image = aUIImage.imageScaledToSize(aUIImageView.bounds.size, isOpaque: false)
    

    Set isOpaque to true if the image has no alpha channel: drawing will have better performance.
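
    On iOS 10 and later, the same scaling can be done with UIGraphicsImageRenderer, which manages the context for you and always returns a non-optional image. A minimal sketch along the same lines (the method name `renderedScaled(to:isOpaque:)` is mine, not part of the answer above):

```swift
import UIKit

extension UIImage {
    // Scaled copy drawn through UIGraphicsImageRenderer (iOS 10+).
    func renderedScaled(to size: CGSize, isOpaque: Bool = false) -> UIImage {
        let format = UIGraphicsImageRendererFormat.default()
        format.opaque = isOpaque  // skip the alpha channel when the image has none
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
```

    The renderer also picks up the screen scale automatically, so there is no magic 0.0 argument to remember.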

  • 2020-11-27 20:00

    On the general problem of resizing a UIImage, this post shows several ways to handle the UIImage object. A UIImage also carries orientation information that needs to be fixed up when resizing; this post and another address that as well.


    -(UIImage*)resizedImageToSize:(CGSize)dstSize
    {
        CGImageRef imgRef = self.CGImage;
        // the below values are regardless of orientation : for UIImages from Camera, width>height (landscape)
        CGSize  srcSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef)); // not equivalent to self.size (which is dependant on the imageOrientation)!
    
        /* Don't resize if we already meet the required destination size. */
        if (CGSizeEqualToSize(srcSize, dstSize)) {
            return self;
        }
    
        CGFloat scaleRatio = dstSize.width / srcSize.width;
    
        // Handle orientation problem of UIImage
        UIImageOrientation orient = self.imageOrientation;
        CGAffineTransform transform = CGAffineTransformIdentity;
        switch(orient) {
    
            case UIImageOrientationUp: //EXIF = 1
                transform = CGAffineTransformIdentity;
                break;
    
            case UIImageOrientationUpMirrored: //EXIF = 2
                transform = CGAffineTransformMakeTranslation(srcSize.width, 0.0);
                transform = CGAffineTransformScale(transform, -1.0, 1.0);
                break;
    
            case UIImageOrientationDown: //EXIF = 3
                transform = CGAffineTransformMakeTranslation(srcSize.width, srcSize.height);
                transform = CGAffineTransformRotate(transform, M_PI);
                break;
    
            case UIImageOrientationDownMirrored: //EXIF = 4
                transform = CGAffineTransformMakeTranslation(0.0, srcSize.height);
                transform = CGAffineTransformScale(transform, 1.0, -1.0);
                break;
    
            case UIImageOrientationLeftMirrored: //EXIF = 5
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeTranslation(srcSize.height, srcSize.width);
                transform = CGAffineTransformScale(transform, -1.0, 1.0);
                transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
                break;  
    
            case UIImageOrientationLeft: //EXIF = 6  
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeTranslation(0.0, srcSize.width);
                transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
                break;  
    
            case UIImageOrientationRightMirrored: //EXIF = 7  
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeScale(-1.0, 1.0);
                transform = CGAffineTransformRotate(transform, M_PI_2);
                break;  
    
            case UIImageOrientationRight: //EXIF = 8  
                dstSize = CGSizeMake(dstSize.height, dstSize.width);
                transform = CGAffineTransformMakeTranslation(srcSize.height, 0.0);
                transform = CGAffineTransformRotate(transform, M_PI_2);
                break;  
    
            default:  
                [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];  
    
        }  
    
        /////////////////////////////////////////////////////////////////////////////
        // The actual resize: draw the image on a new context, applying a transform matrix
        UIGraphicsBeginImageContextWithOptions(dstSize, NO, self.scale);
    
        CGContextRef context = UIGraphicsGetCurrentContext();
    
        if (!context) {
            return nil;
        }
    
        if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
            CGContextScaleCTM(context, -scaleRatio, scaleRatio);
            CGContextTranslateCTM(context, -srcSize.height, 0);
        } else {  
            CGContextScaleCTM(context, scaleRatio, -scaleRatio);
            CGContextTranslateCTM(context, 0, -srcSize.height);
        }
    
        CGContextConcatCTM(context, transform);
    
        // we use srcSize (and not dstSize) as the size to specify is in user space (and we use the CTM to apply a scaleRatio)
        CGContextDrawImage(context, CGRectMake(0, 0, srcSize.width, srcSize.height), imgRef);
        UIImage* resizedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    
        return resizedImage;
    }
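
    Worth noting: the transform table above is only needed because the code draws the raw CGImage, which carries no orientation metadata. If you can go through UIKit instead, UIImage's own draw(in:) applies imageOrientation for you, so a resize that also normalizes orientation reduces to a few lines. A hedged Swift sketch (the function name is illustrative, iOS 10+):

```swift
import UIKit

// Drawing through UIImage bakes the EXIF orientation into the output pixels,
// so no CGAffineTransform switch statement is required.
func normalizedResize(_ image: UIImage, to size: CGSize) -> UIImage {
    return UIGraphicsImageRenderer(size: size).image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```

    The CGImage-level version is still useful when you need precise control over the bitmap, or when working outside UIKit.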
    
  • 2020-11-27 20:05

    I came up with this algorithm to create a half-size image:

    
    
    // Release callback so the CGDataProvider below can free the pixel buffer
    // once the image no longer needs it (the name is mine, added for correctness)
    static void releaseHalvedBytes(void *info, const void *data, size_t size) {
        free((void *) data);
    }

    - (UIImage*) halveImage:(UIImage*)sourceImage {
    
        // Compute the target size
        CGSize sourceSize = sourceImage.size;
        CGSize targetSize;
        targetSize.width = (int) (sourceSize.width / 2);
        targetSize.height = (int) (sourceSize.height / 2);
    
        // Access the source data bytes; CFBridgingRelease hands ownership of
        // the copied data over to ARC
        NSData* sourceData = (NSData*) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(sourceImage.CGImage)));
        unsigned char* sourceBytes = (unsigned char *)[sourceData bytes];
    
        // Some info we'll need later
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(sourceImage.CGImage);
        int bitsPerComponent = (int) CGImageGetBitsPerComponent(sourceImage.CGImage);
        int bitsPerPixel = (int) CGImageGetBitsPerPixel(sourceImage.CGImage);
        int __attribute__((unused)) bytesPerPixel = bitsPerPixel / 8;
        int sourceBytesPerRow = (int) CGImageGetBytesPerRow(sourceImage.CGImage);
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(sourceImage.CGImage);
    
        // The averaging below assumes 32-bit pixels with 8 bits per component
        assert(bytesPerPixel == 4);
        assert(bitsPerComponent == 8);
    
        // Bytes per row may be rounded up to an alignment boundary
        assert(sourceBytesPerRow >= ((int) sourceSize.width) * 4);
        assert([sourceData length] == ((int) sourceSize.height) * sourceBytesPerRow);
    
        // Allocate target data bytes
        int targetBytesPerRow = ((int) targetSize.width) * 4;
        // The algorithm is happier if bytes/row is a multiple of 16
        targetBytesPerRow = (targetBytesPerRow + 15) & ~15;
        int targetBytesSize = ((int) targetSize.height) * targetBytesPerRow;
        unsigned char* targetBytes = (unsigned char*) malloc(targetBytesSize);
        UIImage* targetImage = nil;
    
        // Copy source to target, averaging each 2x2 block of pixels into 1
        for (int row = 0; row < targetSize.height; row++) {
            unsigned char* sourceRowStart = sourceBytes + (2 * row * sourceBytesPerRow);
            unsigned char* targetRowStart = targetBytes + (row * targetBytesPerRow);
            for (int column = 0; column < targetSize.width; column++) {
    
                int sourceColumnOffset = 2 * column * 4;
                int targetColumnOffset = column * 4;
    
                unsigned char* sourcePixel = sourceRowStart + sourceColumnOffset;
                unsigned char* nextRowSourcePixel = sourcePixel + sourceBytesPerRow;
                unsigned char* targetPixel = targetRowStart + targetColumnOffset;
    
                uint32_t* sourceWord = (uint32_t*) sourcePixel;
                uint32_t* nextRowSourceWord = (uint32_t*) nextRowSourcePixel;
                uint32_t* targetWord = (uint32_t*) targetPixel;
    
                uint32_t sourceWord0 = sourceWord[0];
                uint32_t sourceWord1 = sourceWord[1];
                uint32_t sourceWord2 = nextRowSourceWord[0];
                uint32_t sourceWord3 = nextRowSourceWord[1];
    
                // This apparently bizarre sequence divides each channel byte by 4
                // (without letting bits leak between channels) so that adding the
                // four words together produces a per-channel average. We lose the
                // two least significant bits of each channel this way.
                sourceWord0 = (sourceWord0 & 0xFCFCFCFC) >> 2;
                sourceWord1 = (sourceWord1 & 0xFCFCFCFC) >> 2;
                sourceWord2 = (sourceWord2 & 0xFCFCFCFC) >> 2;
                sourceWord3 = (sourceWord3 & 0xFCFCFCFC) >> 2;
    
                uint32_t resultWord = sourceWord0 + sourceWord1 + sourceWord2 + sourceWord3;
                targetWord[0] = resultWord;
            }
        }
    
        // Convert the bits to an image. CGDataProviderCreateWithData does NOT
        // free the buffer on its own; the release callback above does that when
        // the provider is destroyed.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, targetBytes, targetBytesSize, releaseHalvedBytes);
        CGImageRef targetRef = CGImageCreate(targetSize.width, targetSize.height, bitsPerComponent, bitsPerPixel, targetBytesPerRow, colorSpace, bitmapInfo, provider, NULL, FALSE, kCGRenderingIntentDefault);
        targetImage = [UIImage imageWithCGImage:targetRef];
    
        // Clean up. Note: colorSpace came from CGImageGetColorSpace (a Get, not
        // a Copy/Create call), so it must NOT be released here.
        CGImageRelease(targetRef);
        CGDataProviderRelease(provider);
    
        // Return result
        return targetImage;
    }
    

    I tried just taking every other pixel of every other row, instead of averaging, but it resulted in an image about as bad as the default algorithm.
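
    The mask-and-shift sequence in the inner loop deserves a closer look: `(word & 0xFCFCFCFC) >> 2` divides all four 8-bit channels of an RGBA word by 4 at once without letting bits leak between channels, so adding four such words yields a per-channel average. A small Swift sketch of the arithmetic (the pixel values are arbitrary examples):

```swift
// Four identical RGBA pixels; averaging them should return the same pixel,
// minus the two low bits per channel that the trick discards.
let pixels: [UInt32] = [0xFF804020, 0xFF804020, 0xFF804020, 0xFF804020]
let average = pixels.reduce(UInt32(0)) { $0 &+ (($1 & 0xFCFCFCFC) >> 2) }
// average == 0xFC804020: 0xFF lost its low bits, the other channels survive intact
```

    Each masked channel is at most 0x3F after the shift, so the four-way sum never exceeds 0xFC and cannot carry into the neighboring channel.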
