Divide image into array of images with Swift

迷失自我 2021-01-03 02:33

I'm trying to divide an image into 16 images (in a matrix). I'm using Swift 2.1. Here's the code:

let cellSize = Int(originalImage.size.height) /          


        
3 answers
  •  执念已碎
    2021-01-03 03:09

    The fundamental issue is the difference between how UIImage and CGImage interpret their size: UIImage measures its size in "points" while CGImage measures it in pixels, and the conversion factor between the two is the image's scale.

    For example, if a UIImage has a scale of 3, then for every "point" in any given direction of the UIImage there are three pixels in that direction in the underlying CGImage. So for a UIImage with a scale of 3 and a size of 100×100 points, the underlying CGImage is 300×300 pixels.
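
    As a quick illustration (this snippet is not from the original answer; the asset name "photo" is hypothetical), the relationship is easy to inspect directly:

    import UIKit

    // Hypothetical @3x asset named "photo", 100×100 points in size.
    if let photo = UIImage(named: "photo"), let cgImage = photo.cgImage {
        print(photo.size)                     // (100.0, 100.0) – points
        print(photo.scale)                    // 3.0
        print(cgImage.width, cgImage.height)  // 300 300 – pixels
    }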

    To return a simple array of images sliced into an n × n grid (e.g. if n is three, there will be nine images in the array), you can do something like the following in Swift 3:

    /// Slice image into array of tiles
    ///
    /// - Parameters:
    ///   - image: The original image.
    ///   - howMany: How many rows/columns to slice the image up into.
    ///
    /// - Returns: An array of images.
    ///
    /// - Note: The order of the images that are returned will correspond
    ///         to the `imageOrientation` of the image. If the image's
    ///         `imageOrientation` is not `.up`, take care interpreting 
    ///         the order in which the tiled images are returned.
    
    func slice(image: UIImage, into howMany: Int) -> [UIImage] {
        let width: CGFloat
        let height: CGFloat
    
        switch image.imageOrientation {
        case .left, .leftMirrored, .right, .rightMirrored:
            width = image.size.height
            height = image.size.width
        default:
            width = image.size.width
            height = image.size.height
        }
    
        let tileWidth = Int(width / CGFloat(howMany))
        let tileHeight = Int(height / CGFloat(howMany))
    
        let scale = Int(image.scale)
        var images = [UIImage]()
    
        let cgImage = image.cgImage!
    
        var adjustedHeight = tileHeight
    
        var y = 0
        for row in 0 ..< howMany {
            if row == (howMany - 1) {
                adjustedHeight = Int(height) - y
            }
            var adjustedWidth = tileWidth
            var x = 0
            for column in 0 ..< howMany {
                if column == (howMany - 1) {
                    adjustedWidth = Int(width) - x
                }
                let origin = CGPoint(x: x * scale, y: y * scale)
                let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
                let tileCgImage = cgImage.cropping(to: CGRect(origin: origin, size: size))!
                images.append(UIImage(cgImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
                x += tileWidth
            }
            y += tileHeight
        }
        return images
    }
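
    For the 4×4 grid asked about in the question, the call would look something like this (assuming originalImage is the question's UIImage):

    let tiles = slice(image: originalImage, into: 4)   // 16 UIImages, ordered row by row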
    

    Or, in Swift 2.3:

    func slice(image image: UIImage, into howMany: Int) -> [UIImage] {
        let width: CGFloat
        let height: CGFloat
    
        switch image.imageOrientation {
        case .Left, .LeftMirrored, .Right, .RightMirrored:
            width = image.size.height
            height = image.size.width
        default:
            width = image.size.width
            height = image.size.height
        }
    
        let tileWidth = Int(width / CGFloat(howMany))
        let tileHeight = Int(height / CGFloat(howMany))
    
        let scale = Int(image.scale)
        var images = [UIImage]()
        let cgImage = image.CGImage!
    
        var adjustedHeight = tileHeight
    
        var y = 0
        for row in 0 ..< howMany {
            if row == (howMany - 1) {
                adjustedHeight = Int(height) - y
            }
            var adjustedWidth = tileWidth
            var x = 0
            for column in 0 ..< howMany {
                if column == (howMany - 1) {
                    adjustedWidth = Int(width) - x
                }
                let origin = CGPoint(x: x * scale, y: y * scale)
                let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
                let tileCgImage = CGImageCreateWithImageInRect(cgImage, CGRect(origin: origin, size: size))!
                images.append(UIImage(CGImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
                x += tileWidth
            }
            y += tileHeight
        }
        return images
    }
    

    This makes sure that the resulting images are at the correct scale (which is why the above strides through the image in "points" and then multiplies by the scale to get the correct pixel coordinates in the CGImage). It also means that if the dimensions, measured in points, are not evenly divisible by n, the last tile in each row or column makes up the difference. E.g., when you slice an image with a height of 736 points into three rows, the first two tiles will be 245 points tall, but the last one will be 246 points.
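
    A quick sketch of that remainder arithmetic (the numbers mirror the 736-point example above):

    let height = 736
    let tileHeight = height / 3                     // 245
    let lastTileHeight = height - 2 * tileHeight    // 246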

    There is one exception that this does not (entirely) handle gracefully: if the UIImage has an imageOrientation other than .up, the order in which the images are returned corresponds to that orientation, not to the upper-left corner of the image as you view it.
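
    One way to sidestep that, as a sketch not taken from the original answer (it assumes iOS 10+ for UIGraphicsImageRenderer), is to redraw the image so its orientation becomes .up before slicing:

    /// Redraw the image so that its `imageOrientation` is `.up`.
    /// Sketch only; assumes iOS 10+ (UIGraphicsImageRenderer).
    func normalized(_ image: UIImage) -> UIImage {
        guard image.imageOrientation != .up else { return image }
        let format = UIGraphicsImageRendererFormat()
        format.scale = image.scale
        return UIGraphicsImageRenderer(size: image.size, format: format).image { _ in
            image.draw(in: CGRect(origin: .zero, size: image.size))
        }
    }

    let tiles = slice(image: normalized(originalImage), into: 4)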
