Is this code drawing at the point or pixel level? How do you draw retina pixels?

北海茫月 2020-12-03 18:56

Consider this admirable script which draws a (circular) gradient,

https://github.com/paiv/AngleGradientLayer/blob/master/AngleGradient/AngleGradientLayer.m

3 Answers
  •  感情败类
    2020-12-03 19:42

    So, based on the magnificent answer of KenThomases, and a day of testing, here's exactly how you draw at physical pixel level. I think.

    // RGBA: one byte per channel, matching the premultipliedLast format below.
    struct RGBA {
        var r: UInt8
        var g: UInt8
        var b: UInt8
        var a: UInt8
    }

    class PixelwiseLayer: CALayer {

        override init() {
            super.init()
            // SET THE CONTENT SCALE AT INITIALIZATION TIME
            contentsScale = UIScreen.main.scale
        }

        required init?(coder aDecoder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        override func draw(in ctx: CGContext) {
            let rectDEVICESPACE = ctx.convertToDeviceSpace(bounds).size
            // convertToDeviceSpace >>KNOWS ABOUT CONTENT SCALE<<
            // and YOU have CORRECTLY SET content scale at initialization time

            // write pixels to DEVICE SPACE, BUT ...
            let img = pixelByPixelImage(sizeInDeviceSpace: rectDEVICESPACE)

            // ... BUT the draw call uses only the NORMAL BOUNDS
            ctx.draw(img, in: bounds)
        }

        private func pixelByPixelImage(sizeInDeviceSpace: CGSize) -> CGImage {
            let wPIXELS = Int(sizeInDeviceSpace.width)
            let hPIXELS = Int(sizeInDeviceSpace.height)
            // !!!THAT IS ACTUAL PIXELS!!!

            // you !!!DO NOT!!! need to multiply by UIScreen.main.scale,
            // as is seen in much example code.
            // convertToDeviceSpace does it properly.

            let bitsPerComponent = 8                          // one byte per channel
            let bytesPerPixel = MemoryLayout<RGBA>.stride     // 4 bytes: RGBA
            let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
            let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

            var data = [RGBA]()
            data.reserveCapacity(wPIXELS * hPIXELS)

            for y in 0..<hPIXELS {
                for x in 0..<wPIXELS {
                    // Per-pixel color calculation goes here; this checker
                    // pattern is only a placeholder for your own math.
                    let c: UInt8 = (x + y) % 2 == 0 ? 255 : 0
                    data.append(RGBA(r: c, g: c, b: c, a: 255))
                }
            }

            let provider = CGDataProvider(
                data: Data(bytes: &data, count: data.count * bytesPerPixel) as CFData)!
            return CGImage(width: wPIXELS,
                           height: hPIXELS,
                           bitsPerComponent: bitsPerComponent,
                           bitsPerPixel: bytesPerPixel * 8,
                           bytesPerRow: wPIXELS * bytesPerPixel,
                           space: colorSpace,
                           bitmapInfo: bitmapInfo,
                           provider: provider,
                           decode: nil,
                           shouldInterpolate: false,
                           intent: .defaultIntent)!
        }
    }

    The critical elements:

    first ...

            super.init()
            // SET THE CONTENT SCALE >>>>AT INITIALIZATION TIME<<<<
            contentsScale = UIScreen.main.scale
    

    second ...

        override func draw(in ctx: CGContext) {

            let realPixelSize = ctx.convertToDeviceSpace(bounds).size
            ...
        }
    

    third ...

        override func draw(in ctx: CGContext) {

            ...
            let img = yourPixelDrawingFunction( realPixelSize ) // NOT BOUNDS
            ctx.draw(img, in: bounds)  // NOT REALPIXELSIZE
        }
    

    Example ...

    console:
    contentsScale 3.0
    UIScreen.main.scale 3.0
    bounds (0.0, 0.0, 84.0, 84.0)
    rectDEVICESPACE (252.0, 252.0)
    actual pixels being created as data: w, h 252, 252
    
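    Getting the layer on screen, as a usage sketch (the names `PixelwiseView` and `someView` are mine, not part of the answer): either make the layer a view's backing layer, or add it as a standalone sublayer, in which case the contentsScale line in init is what saves you.

        // Hypothetical host view using PixelwiseLayer as its backing layer:
        class PixelwiseView: UIView {
            override class var layerClass: AnyClass { PixelwiseLayer.self }
        }

        // Or as a standalone sublayer (contentsScale was set in init above),
        // where someView stands in for whatever view you are drawing into:
        let pixelLayer = PixelwiseLayer()
        pixelLayer.frame = CGRect(x: 0, y: 0, width: 84, height: 84)
        someView.layer.addSublayer(pixelLayer)
        pixelLayer.setNeedsDisplay()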

    It's absolutely critical to set contentsScale at initialization time.

    I tried several OS versions, and it seems that, for better or worse, the default contentsScale for layers is unfortunately 1 rather than the screen density, so do not forget to set it! (Note that other parts of the OS also use it to know how to handle your layer efficiently, etc.)
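    You can verify that default directly; a minimal sketch (a plain CALayer on any modern iPhone, where UIScreen.main.scale is 2 or 3):

        let plain = CALayer()
        print(plain.contentsScale)                   // 1.0, regardless of screen density
        plain.contentsScale = UIScreen.main.scale    // so set it yourself, every time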
