High Quality Scaling of UIImage

Front-end · Unresolved · 4 answers · 1130 views
谎友^ 2020-11-27 19:16

I need to scale the resolution of an image coming from a view layer in an iPhone application. The obvious way is to specify a scale factor in UIGraphicsBeginImageContextWithOptions, but the default interpolation quality isn't good enough when scaling down.

4 Answers
  •  星月不相逢
    2020-11-27 20:05

    I came up with this algorithm to create a half-size image:

    
    
    - (UIImage*) halveImage:(UIImage*)sourceImage {
    
        // Compute the target size
        CGSize sourceSize = sourceImage.size;
        CGSize targetSize;
        targetSize.width = (int) (sourceSize.width / 2);
        targetSize.height = (int) (sourceSize.height / 2);
    
        // Access the source data bytes.  CGDataProviderCopyData follows the
        // Copy rule, so transfer ownership with CFBridgingRelease to avoid a leak.
        NSData* sourceData = (NSData*) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(sourceImage.CGImage)));
        unsigned char* sourceBytes = (unsigned char *)[sourceData bytes];
    
        // Some info we'll need later
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(sourceImage.CGImage);
        int bitsPerComponent = CGImageGetBitsPerComponent(sourceImage.CGImage);
        int bitsPerPixel = CGImageGetBitsPerPixel(sourceImage.CGImage);
        int __attribute__((unused)) bytesPerPixel = bitsPerPixel / 8;
        int sourceBytesPerRow = CGImageGetBytesPerRow(sourceImage.CGImage);
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(sourceImage.CGImage);
    
        assert(bytesPerPixel == 4);
        assert(bitsPerComponent == 8);
    
        // Bytes per row is (apparently) rounded to some boundary
        assert(sourceBytesPerRow >= ((int) sourceSize.width) * 4);
        assert([sourceData length] == ((int) sourceSize.height) * sourceBytesPerRow);
    
        // Allocate target data bytes
        int targetBytesPerRow = ((int) targetSize.width) * 4;
        // Algorithm is happier if bytes/row is a multiple of 16
        targetBytesPerRow = (targetBytesPerRow + 15) & 0xFFFFFFF0;
        int targetBytesSize = ((int) targetSize.height) * targetBytesPerRow;
        unsigned char* targetBytes = (unsigned char*) malloc(targetBytesSize);
        UIImage* targetImage = nil;
    
        // Copy source to target, averaging 4 pixels into 1
        for (int row = 0; row < targetSize.height; row++) {
            unsigned char* sourceRowStart = sourceBytes + (2 * row * sourceBytesPerRow);
            unsigned char* targetRowStart = targetBytes + (row * targetBytesPerRow);
            for (int column = 0; column < targetSize.width; column++) {
    
                int sourceColumnOffset = 2 * column * 4;
                int targetColumnOffset = column * 4;
    
                unsigned char* sourcePixel = sourceRowStart + sourceColumnOffset;
                unsigned char* nextRowSourcePixel = sourcePixel + sourceBytesPerRow;
                unsigned char* targetPixel = targetRowStart + targetColumnOffset;
    
                uint32_t* sourceWord = (uint32_t*) sourcePixel;
                uint32_t* nextRowSourceWord = (uint32_t*) nextRowSourcePixel;
                uint32_t* targetWord = (uint32_t*) targetPixel;
    
                uint32_t sourceWord0 = sourceWord[0];
                uint32_t sourceWord1 = sourceWord[1];
                uint32_t sourceWord2 = nextRowSourceWord[0];
                uint32_t sourceWord3 = nextRowSourceWord[1];
    
                // Mask off the low 2 bits of every byte and divide each byte by 4;
                // summing the four quarter-values then yields a per-channel average
                // with no carries crossing channel boundaries.  We lose the two
                // least significant bits of each sample this way.
                sourceWord0 = (sourceWord0 & 0xFCFCFCFC) >> 2;
                sourceWord1 = (sourceWord1 & 0xFCFCFCFC) >> 2;
                sourceWord2 = (sourceWord2 & 0xFCFCFCFC) >> 2;
                sourceWord3 = (sourceWord3 & 0xFCFCFCFC) >> 2;
    
                uint32_t resultWord = sourceWord0 + sourceWord1 + sourceWord2 + sourceWord3;
                targetWord[0] = resultWord;
            }
        }
    
        // Convert the bits to an image.  CGDataProviderCreateWithData would NOT
        // free targetBytes for us, so hand ownership to an NSData instead.
        NSData* targetData = [NSData dataWithBytesNoCopy:targetBytes length:targetBytesSize freeWhenDone:YES];
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef) targetData);
        CGImageRef targetRef = CGImageCreate(targetSize.width, targetSize.height, bitsPerComponent, bitsPerPixel, targetBytesPerRow, colorSpace, bitmapInfo, provider, NULL, FALSE, kCGRenderingIntentDefault);
        targetImage = [UIImage imageWithCGImage:targetRef];
    
        // Clean up.  UIImage retains the CGImage.  Note colorSpace came from
        // CGImageGetColorSpace (the Get rule), so we must NOT release it here.
        CGDataProviderRelease(provider);
        CGImageRelease(targetRef);
    
        // Return result
        return targetImage;
    }
    

    I tried just taking every other pixel of every other row, instead of averaging, but it resulted in an image about as bad as the default algorithm.
