CGContext: how do I erase pixels (e.g. kCGBlendModeClear) outside of a bitmap context?

梦毁少年i asked 2021-02-01 09:29

I'm trying to build an eraser tool using Core Graphics, and I'm finding it incredibly difficult to make a performant eraser - it all comes down to:

CGContextSetBlendMode(context, kCGBlendModeClear)
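
For context, the usual eraser approach strokes the current path into an offscreen bitmap context with the clear blend mode, then reads the result back as an image. A minimal sketch of that setup (curImage, currentPath, and lineWidth are illustrative names, not code from this question):

    // Sketch: erase along a path in an offscreen bitmap context.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [curImage drawAtPoint:CGPointZero];                 // existing drawing
    CGContextAddPath(context, currentPath);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetBlendMode(context, kCGBlendModeClear);  // replace covered pixels with transparent
    CGContextStrokePath(context);
    curImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

Redrawing the whole buffer like this on every touch event is what makes the eraser slow.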

3 Answers
  •  终归单人心
    2021-02-01 10:08

    Core Graphics follows a painting paradigm. When you are painting, it's hard to remove paint you've already put on the canvas, but super easy to add more paint on top. The blend modes of a bitmap context give you a way to do something hard (scrape paint off the canvas) in a few lines of code. Few lines of code do not make it a cheap operation, which is why it performs slowly.

    The easiest way to fake clearing out pixels without having to do the offscreen bitmap buffering is to paint the background of your view over the image.

    -(void)drawRect:(CGRect)rect
    {
        if (drawingStroke) {
            CGColorRef lineCgColor = lineColor.CGColor;
            if (eraseModeOn) {
                //Use concrete background color to display erasing. You could use the backgroundColor property of the view, or define a color here
                lineCgColor = [[self backgroundColor] CGColor];
            } 
            [curImage drawAtPoint:CGPointZero];
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextAddPath(context, currentPath);
            CGContextSetLineCap(context, kCGLineCapRound);
            CGContextSetLineWidth(context, lineWidth);
            CGContextSetBlendMode(context, kCGBlendModeNormal);
            CGContextSetStrokeColorWithColor(context, lineCgColor);
            CGContextStrokePath(context);
        } else {
            [curImage drawAtPoint:CGPointZero];
        }
    }
    

    The more difficult (but more correct) way is to do the image editing on a background serial queue in response to an editing event. When you get a new action, you do the bitmap rendering in the background to an image buffer. When the buffered image is ready, you call setNeedsDisplay to allow the view to be redrawn during the next update cycle. This is more correct as drawRect: should be displaying the content of your view as quickly as possible, not processing the editing action.

    @interface ImageEditor : UIView
    
    @property (nonatomic, strong) UIImage * imageBuffer;
    @property (nonatomic, strong) dispatch_queue_t serialQueue;
    @end
    
    @implementation ImageEditor
    
    - (dispatch_queue_t) serialQueue
    {
        if (_serialQueue == nil)
        {
            _serialQueue = dispatch_queue_create("com.example.com.imagebuffer", DISPATCH_QUEUE_SERIAL);
        }
        return _serialQueue;
    }
    
    - (void)editingAction
    {
        dispatch_async(self.serialQueue, ^{
            CGSize bufferSize = [self.imageBuffer size];
    
            UIGraphicsBeginImageContext(bufferSize);
    
            CGContextRef context = UIGraphicsGetCurrentContext();
    
            // Draw via UIKit so the image is not vertically flipped
            // (CGContextDrawImage uses Core Graphics' flipped coordinate system)
            [self.imageBuffer drawInRect:CGRectMake(0, 0, bufferSize.width, bufferSize.height)];
    
            //Do editing action, draw a clear line, solid line, etc
    
            self.imageBuffer = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
    
            dispatch_async(dispatch_get_main_queue(), ^{
                [self setNeedsDisplay];
            });
        });
    }
    -(void)drawRect:(CGRect)rect
    {
        [self.imageBuffer drawAtPoint:CGPointZero];
    }
    
    @end
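
    The editing step elided above could, for an eraser stroke, use kCGBlendModeClear safely, because here it runs against an offscreen bitmap context rather than the view's own context. A sketch of what might go at the "Do editing action" comment, with currentPath and lineWidth as assumed properties not shown in the answer:

        // Inside editingAction, in place of the "Do editing action" comment.
        CGContextAddPath(context, currentPath);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, lineWidth);
        CGContextSetBlendMode(context, kCGBlendModeClear); // punch transparency into the buffer
        CGContextStrokePath(context);

    Because the work happens on the serial queue, successive strokes are applied to the buffer in order, and the main thread only ever draws the finished image.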
    
