Proper usage of CIDetectorTracking

You're right, this key is not very well documented; besides the API reference, Apple's other Core Image documentation doesn't explain it either.

I tried different values for CIDetectorTracking, and the only accepted values seem to be @(YES) and @(NO). Any other value prints this message to the console:

Unknown CIDetectorTracking specified. Ignoring.

When you set the value to @(YES), you should get tracking IDs with the detected face features.
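
For reference, here is a minimal sketch (not from the original answer) of how CIDetectorTracking can be used with a CIDetector; the image variable is assumed to be a CIImage you already have, e.g. created from a video frame:

CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow,
                                                     CIDetectorTracking : @(YES) }];

// 'image' is assumed to be a CIImage you created elsewhere (e.g. from a video frame)
NSArray *features = [detector featuresInImage:image];
for ( CIFaceFeature *face in features ) {
    if ( face.hasTrackingID ) {
        // The same physical face should keep the same trackingID across consecutive frames
        NSLog(@"face %d at %@", face.trackingID, NSStringFromCGRect(face.bounds));
    }
}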


However, when you want to detect faces in content captured from the camera, you should prefer the face detection API in AVFoundation. It has face tracking built in, and the detection runs in the background on the GPU, so it is much faster than Core Image face detection. It requires iOS 6 and at least an iPhone 4S or iPad 2.

The faces are sent as metadata objects (AVMetadataFaceObject) to the AVCaptureMetadataOutputObjectsDelegate.

You can use this code (taken from StacheCam 2 and the slides of the corresponding WWDC 2012 session) to set up face detection and get face metadata objects:

- (void) setupAVFoundationFaceDetection
{       
    self.metadataOutput = [AVCaptureMetadataOutput new];
    if ( ! [self.session canAddOutput:self.metadataOutput] ) {
        return;
    }

    // Metadata processing will be fast, and mostly updating UI which should be done on the main thread
    // So just use the main dispatch queue instead of creating a separate one
    // (compare this to the expensive CoreImage face detection, done on a separate queue)
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [self.session addOutput:self.metadataOutput];

    if ( ! [self.metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace] ) {
        // face detection isn't supported (via AV Foundation), fall back to CoreImage
        return;
    }

    // We only want faces, if we don't set this we would detect everything available
    // (some objects may be expensive to detect, so best form is to select only what you need)
    self.metadataOutput.metadataObjectTypes = @[ AVMetadataObjectTypeFace ];

}
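
For completeness, here is a hedged sketch of how the method above could be wired into a capture session. The session creation and camera input below are an assumption, not part of the original answer:

- (void) startCapture
{
    // Assumption: self.session is the same AVCaptureSession used in setupAVFoundationFaceDetection
    self.session = [AVCaptureSession new];

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if ( ! input || ! [self.session canAddInput:input] ) {
        return;
    }
    [self.session addInput:input];

    // Add the metadata output and enable face detection (method shown above)
    [self setupAVFoundationFaceDetection];

    [self.session startRunning];
}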

// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for ( AVMetadataObject *object in metadataObjects ) {
        if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            CMTime timestamp = [face time];
            CGRect faceRectangle = [face bounds];
            NSInteger faceID = [face faceID];       // use this ID for tracking the same face across frames
            CGFloat rollAngle = [face rollAngle];   // only meaningful if face.hasRollAngle
            CGFloat yawAngle = [face yawAngle];     // only meaningful if face.hasYawAngle
            // Do interesting things with this face
        }
    }
}

If you want to display the face frames in the preview layer, you need to get the transformed face object:

AVMetadataFaceObject *adjusted = (AVMetadataFaceObject *)[self.previewLayer transformedMetadataObjectForMetadataObject:face];
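
As a small illustration (an assumption, not from the original answer), you could then highlight the face with a CALayer placed on top of the preview layer:

CALayer *faceBox = [CALayer layer];
faceBox.borderColor = [UIColor yellowColor].CGColor;
faceBox.borderWidth = 2.0;
faceBox.frame = adjusted.bounds;   // bounds are already in preview-layer coordinates
[self.previewLayer addSublayer:faceBox];

In a real app you would reuse or remove such layers on every delegate callback instead of adding a new one each time.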

For details, check out the StacheCam 2 sample code from WWDC 2012.
