How to get a rotated, zoomed and panned image from a UIImageView at its full resolution?

2020-12-07 23:28

I have a UIImageView which can be rotated, panned and scaled with gesture recognizers. As a result it is cropped by its enclosing view. Everything is working, but how do I get the visible part of the image at its full resolution?

3 Answers
  • 2020-12-08 00:00

    I think the code below captures your current view:

    - (UIImage *)captureView {
    
        // Size the context to the view controller's view bounds (screen resolution).
        CGRect rect = [self.view bounds];
    
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        // Render the image view's layer, including its current transform, into the context.
        [self.yourImage.layer renderInContext:context];
        UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    
        return img;
    
    }
    
    

    I think you want to capture what is shown on screen and use it, so I posted this code. Hope this helps you. :)
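
    For example, a possible call site (the captureView name comes from the snippet above; saving to the photo library is just one hypothetical use):

    // Hypothetical usage: grab the snapshot and save it to the photo library.
    UIImage *snapshot = [self captureView];
    UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);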

  • 2020-12-08 00:07

    Why capturing the view if you have the original image? Just apply the transformations to it. Something like this may be a start:

    UIImage *image = [UIImage imageNamed:@"<# original #>"];
    
    CIImage *cimage = [CIImage imageWithCGImage:image.CGImage];
    
    // build the transform you want
    CGAffineTransform t = CGAffineTransformIdentity;
    CGFloat angle = [(NSNumber *)[self.faceImageView valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
    CGFloat scale = [(NSNumber *)[self.faceImageView valueForKeyPath:@"layer.transform.scale"] floatValue];    
    t = CGAffineTransformConcat(t, CGAffineTransformMakeScale(scale, scale));
    // negate the angle: Core Image's coordinate space is flipped vertically relative to UIKit
    t = CGAffineTransformConcat(t, CGAffineTransformMakeRotation(-angle));
    
    // create a new CIImage using the transform, crop, filters, etc.
    CIImage *timage = [cimage imageByApplyingTransform:t];
    
    // draw the result
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef imageRef = [context createCGImage:timage fromRect:[timage extent]];
    UIImage *result = [UIImage imageWithCGImage:imageRef];
    
    // save to disk
    NSData *png = UIImagePNGRepresentation(result);
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/result.png"];
    if (png && [png writeToFile:path atomically:NO]) {
        NSLog(@"\n%@", path);
    }
    CGImageRelease(imageRef);
    

    You can easily crop the output if that's what you want (see -[CIImage imageByCroppingToRect:]), take the translation into account, apply a Core Image filter, etc., depending on your exact needs.
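
    As a minimal sketch of that cropping step, assuming you have already derived the visible rectangle in the transformed image's coordinate space (visibleRect below is a hypothetical placeholder):

    // Sketch: crop the transformed CIImage before rendering it.
    // visibleRect is a placeholder; compute it from the pan translation and the view geometry.
    CGRect visibleRect = CGRectMake(0, 0, 1024, 768);
    CIImage *croppedImage = [timage imageByCroppingToRect:visibleRect];
    CGImageRef croppedRef = [context createCGImage:croppedImage fromRect:[croppedImage extent]];
    UIImage *croppedResult = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);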

  • 2020-12-08 00:12

    The following code creates a snapshot of the enclosing view (superview of faceImageView with clipsToBounds set to YES) using a calculated scale factor.

    It assumes that the content mode of faceImageView is UIViewContentModeScaleAspectFit and that the frame of faceImageView is set to the enclosingView's bounds.
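
    For reference, a minimal setup matching those assumptions might look like this (the enclosingView and faceImageView names follow the snippet below; the rest is an assumed configuration):

    // Assumed setup: the image view fills its clipping superview and letterboxes its image.
    enclosingView.clipsToBounds = YES;
    faceImageView.frame = enclosingView.bounds;
    faceImageView.contentMode = UIViewContentModeScaleAspectFit;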

    - (UIImage *)captureView {
    
        // Scale currently applied by the gesture transform, extracted from the affine matrix.
        float imageScale = sqrtf(powf(faceImageView.transform.a, 2.f) + powf(faceImageView.transform.c, 2.f));
        // Scale applied by UIViewContentModeScaleAspectFit to fit the image into the view's bounds.
        CGFloat widthScale = faceImageView.bounds.size.width / faceImageView.image.size.width;
        CGFloat heightScale = faceImageView.bounds.size.height / faceImageView.image.size.height;
        float contentScale = MIN(widthScale, heightScale);
        float effectiveScale = imageScale * contentScale;
    
        // Size the capture context in source-image pixels rather than screen points.
        CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale, enclosingView.bounds.size.height / effectiveScale);
    
        NSLog(@"effectiveScale = %0.2f, captureSize = %@", effectiveScale, NSStringFromCGSize(captureSize));
    
        UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        // Scale up the drawing so the enclosing view is rendered at full image resolution.
        CGContextScaleCTM(context, 1/effectiveScale, 1/effectiveScale);
        [enclosingView.layer renderInContext:context];
        UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    
        return img;
    }
    
    

    Depending on the current transform, the resulting image will have a different size. For example, when you zoom in, the size gets smaller. You can also set effectiveScale to a constant value in order to get an image with a constant size.
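
    A sketch of that variant, with a hypothetical constant scale:

    // Variant: use a fixed scale so the captured size does not depend on the zoom level.
    float effectiveScale = 0.5f; // hypothetical constant; choose the resolution you need
    CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale,
                                    enclosingView.bounds.size.height / effectiveScale);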

    Your gesture recognizer code does not limit the scale factor, i.e. you can zoom in and out without bounds. That can be dangerous: my capture method can produce very large images when you have zoomed out a lot.

    If you have zoomed out, the background of the captured image will be black. If you want it to be transparent, set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO.
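
    That is a one-line change in the method above:

    // Variant: pass NO for the opaque parameter so zoomed-out areas stay transparent.
    UIGraphicsBeginImageContextWithOptions(captureSize, NO, 0.0);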
