iOS AVFoundation tap to focus

Submitted by 烈酒焚心 on 2019-11-29 21:30:23
Konrad Lang

You have to adjust the touchPoint to a range of [0,1] using something like the following code:

    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat screenWidth = screenRect.size.width;
    CGFloat screenHeight = screenRect.size.height;
    double focus_x = thisFocusPoint.center.x / screenWidth;
    double focus_y = thisFocusPoint.center.y / screenHeight;

    NSError *error = nil;
    if ([[self captureManager].videoDevice lockForConfiguration:&error]) {
        [[self captureManager].videoDevice setFocusPointOfInterest:CGPointMake(focus_x, focus_y)];
        [[self captureManager].videoDevice unlockForConfiguration];
    }

The documentation on this can be found in Apple's AV Foundation Programming Guide, in the Media Capture section, which covers focus modes:

If it’s supported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area, and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.
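That coordinate convention means a tap in a portrait, full-screen preview has to be rotated into the landscape-based space before being handed to focusPointOfInterest. A minimal sketch of that conversion, assuming the preview fills the screen in portrait with the home button at the bottom (the helper name is mine):

```swift
import Foundation

// Maps a tap in a full-screen portrait preview into the [0,1] x [0,1]
// focus space, whose origin is the top-left of the picture in landscape
// orientation with the home button on the right.
func focusPoint(forPortraitTap tap: CGPoint, viewSize: CGSize) -> CGPoint {
    // In portrait, the sensor's x axis runs down the screen, and its
    // y axis runs from the right edge of the screen to the left.
    let x = tap.y / viewSize.height
    let y = 1.0 - tap.x / viewSize.width
    return CGPoint(x: x, y: y)
}
```

With this mapping, a tap at the top-right corner of a portrait screen lands on {0,0} and a tap at the bottom-left lands on {1,1}, matching the convention quoted above.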

Catalin
Set up a single-tap gesture recognizer on the preview view:

    UITapGestureRecognizer *shortTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTapToFocus:)];
    shortTap.numberOfTapsRequired = 1;
    shortTap.numberOfTouchesRequired = 1;
    [viewCanvasRecording addGestureRecognizer:shortTap];

and then this:

    - (void)handleTapToFocus:(UITapGestureRecognizer *)tapGesture
    {
        AVCaptureDevice *acd = !currentFrontCamera ? captureBackInput.device : captureFrontInput.device;

        if (tapGesture.state == UIGestureRecognizerStateEnded)
        {
            CGPoint thisFocusPoint = [tapGesture locationInView:viewCanvasRecording];

            double focus_x = thisFocusPoint.x / viewCanvasRecording.frame.size.width;
            double focus_y = thisFocusPoint.y / viewCanvasRecording.frame.size.height;

            if ([acd isFocusModeSupported:AVCaptureFocusModeAutoFocus] && [acd isFocusPointOfInterestSupported])
            {
                if ([acd lockForConfiguration:nil])
                {
                    [acd setFocusMode:AVCaptureFocusModeAutoFocus];
                    [acd setFocusPointOfInterest:CGPointMake(focus_x, focus_y)];

                    /*
                    if ([acd isExposureModeSupported:AVCaptureExposureModeAutoExpose] && [acd isExposurePointOfInterestSupported])
                    {
                        [acd setExposureMode:AVCaptureExposureModeAutoExpose];
                        [acd setExposurePointOfInterest:CGPointMake(focus_x, focus_y)];
                    }*/

                    [acd unlockForConfiguration];
                }
            }
        }
    }

A Swift version:

    @IBAction func tapToFocus(_ sender: UITapGestureRecognizer) {
        guard sender.state == .ended, let device = captureDevice else { return }

        let thisFocusPoint = sender.location(in: previewView)
        print("touch to focus ", thisFocusPoint)

        let focus_x = thisFocusPoint.x / previewView.frame.size.width
        let focus_y = thisFocusPoint.y / previewView.frame.size.height

        if device.isFocusModeSupported(.autoFocus) && device.isFocusPointOfInterestSupported {
            do {
                try device.lockForConfiguration()
                device.focusMode = .autoFocus
                device.focusPointOfInterest = CGPoint(x: focus_x, y: focus_y)

                if device.isExposureModeSupported(.autoExpose) && device.isExposurePointOfInterestSupported {
                    device.exposureMode = .autoExpose
                    device.exposurePointOfInterest = CGPoint(x: focus_x, y: focus_y)
                }

                device.unlockForConfiguration()
            } catch {
                print(error)
            }
        }
    }

Here is how I handle gestures for my AV camera preview. Set up your UITapGestureRecognizer first, then convert the touch point with captureDevicePointOfInterestForPoint:.

    - (void)setupGestures
    {
      UITapGestureRecognizer *tapToFocusRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self
                                              action:@selector(handleTapToFocusAndExposureRecognizer:)];
      [self addGestureRecognizer:tapToFocusRecognizer];
    }

    - (void)handleTapToFocusAndExposureRecognizer:(UITapGestureRecognizer *)tapRecognizer {
      CGPoint touchPoint = [tapRecognizer locationInView:self];
      CGPoint point = [self.previewLayer captureDevicePointOfInterestForPoint:touchPoint];
      AVCaptureDevice *device = [self.videoCaptureDeviceInput device];
      NSError *error = nil;

      if (tapRecognizer.state == UIGestureRecognizerStateEnded) {
        if (![device lockForConfiguration:&error]) {
          if (error) {
            RCTLogError(@"%s: %@", __func__, error);
          }
          return;
        }

        [device setFocusPointOfInterest:point];
        [device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];

        [device setExposurePointOfInterest:point];
        [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];

        [device unlockForConfiguration];
      }
    }

I'm using AVCaptureVideoPreviewLayer to convert the touch point. If you render the preview with a GLKView instead of AVCaptureVideoPreviewLayer, you can't call captureDevicePointOfInterestForPoint: and have to compute the normalized point yourself, as the previous answers did.
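If you do render the preview yourself, the manual mapping also has to undo any aspect-fill cropping before normalizing. A rough sketch under that assumption (the helper name and parameters are illustrative, and the orientation mapping discussed in the first answer is left out):

```swift
import Foundation

// Maps a tap on an aspect-fill preview back to a normalized [0,1] x [0,1]
// point in the full camera image. Orientation handling is omitted.
func pointOfInterest(forTap tap: CGPoint,
                     viewSize: CGSize,
                     imageSize: CGSize) -> CGPoint {
    // Aspect-fill: scale the image so it covers the view, then center-crop.
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let scaledWidth = imageSize.width * scale
    let scaledHeight = imageSize.height * scale
    // Offsets of the visible region inside the scaled image.
    let xOffset = (scaledWidth - viewSize.width) / 2
    let yOffset = (scaledHeight - viewSize.height) / 2
    // Shift the tap into the scaled image, then normalize.
    return CGPoint(x: (tap.x + xOffset) / scaledWidth,
                   y: (tap.y + yOffset) / scaledHeight)
}
```

For a square view over a wider image, a tap at the view's center still maps to the image's center, while taps near the edges land inside the cropped-away margins rather than at 0 or 1.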

I'm new to iOS development; hope this helps.
