Swift: Get the TrueDepth camera parameters for face tracking in ARKit

Submitted by 生来就可爱ヽ(ⅴ<●) on 2020-07-27 03:31:34

Question


My goal:

I am trying to get the TrueDepth camera parameters (such as the intrinsics, extrinsics, and lens distortion) while doing face tracking. I have read that there are examples of doing this with OpenCV, but I am wondering how one should achieve similar goals in Swift.

What I have read and tried:

I have read the Apple documentation about ARCamera: intrinsics and about AVCameraCalibrationData: extrinsicMatrix and intrinsicMatrix.

However, all I found was just the declarations for both AVCameraCalibrationData and ARCamera:


For AVCameraCalibrationData


For intrinsicMatrix

var intrinsicMatrix: matrix_float3x3 { get }

For extrinsicMatrix

var extrinsicMatrix: matrix_float4x3 { get }

I also read this post: get Camera Calibration Data on iOS and tried Bourne's suggestion:

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    let ex = photo.depthData?.cameraCalibrationData?.extrinsicMatrix
    //let ex = photo.cameraCalibrationData?.extrinsicMatrix
    let int = photo.cameraCalibrationData?.intrinsicMatrix
    _ = photo.depthData?.cameraCalibrationData?.lensDistortionCenter
    print("ExtrinsicM: \(String(describing: ex))")
    print("isCameraCalibrationDataDeliverySupported: \(output.isCameraCalibrationDataDeliverySupported)")
}

But it does not print the matrices at all.


For ARCamera, I have read Andy Fedoroff's answer to Focal Length of the camera used in RealityKit:

var intrinsics: simd_float3x3 { get }

func inst() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
        // Focal length and sensor height (in mm) of the render camera
        print("Focal Length: \(String(describing: self.sceneView.pointOfView?.camera?.focalLength))")
        print("Sensor Height: \(String(describing: self.sceneView.pointOfView?.camera?.sensorHeight))")
        // Intrinsics matrix of the current ARFrame's camera
        let frame = self.sceneView.session.currentFrame
        print("Intrinsics fx: \(String(describing: frame?.camera.intrinsics.columns.0.x))")
        print("Intrinsics fy: \(String(describing: frame?.camera.intrinsics.columns.1.y))")
        print("Intrinsics ox: \(String(describing: frame?.camera.intrinsics.columns.2.x))")
        print("Intrinsics oy: \(String(describing: frame?.camera.intrinsics.columns.2.y))")
    }
}

It shows the render camera parameters:

Focal Length: Optional(20.784610748291016)
Sensor Height: Optional(24.0)
Intrinsics fx: Optional(1277.3052)
Intrinsics fy: Optional(1277.3052)
Intrinsics ox: Optional(720.29443)
Intrinsics oy: Optional(539.8974)

However, these are the renderer's camera parameters, not those of the TrueDepth camera I am using for face tracking.


So can anyone help me get started with obtaining the TrueDepth camera parameters? The documentation does not really show any examples beyond the declarations.

Thank you so much!


Answer 1:


The reason you cannot print the intrinsics is probably that the optional chaining returned nil. You should have a look at Apple's remarks here and here:

Camera calibration data is present only if you specified the isCameraCalibrationDataDeliveryEnabled and isDualCameraDualPhotoDeliveryEnabled settings when requesting capture. For camera calibration data in a capture that includes depth data, see the AVDepthData cameraCalibrationData property.

To request capture of depth data alongside a photo (on supported devices), set the isDepthDataDeliveryEnabled property of your photo settings object to true when requesting photo capture. If you did not request depth data delivery, this property's value is nil.

So if you want to get the intrinsicMatrix and extrinsicMatrix of the TrueDepth camera, you should use builtInTrueDepthCamera as the input device, set isDepthDataDeliveryEnabled to true on the pipeline's photo output, and set isDepthDataDeliveryEnabled to true on the photo settings when you capture. You can then access the matrices in the photoOutput(_:didFinishProcessingPhoto:error:) callback through the depthData.cameraCalibrationData property of the photo argument.
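
For example, reading the calibration data in that callback might look like the following sketch (ViewController is an assumed host class; the capture pipeline itself is configured in the next snippet, and cameraCalibrationData stays nil unless depth delivery was enabled there):

import AVFoundation

extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // cameraCalibrationData is nil unless depth delivery was enabled
        // on both the photo output and the photo settings.
        guard let calibration = photo.depthData?.cameraCalibrationData else {
            print("No calibration data delivered")
            return
        }
        print("Intrinsic matrix: \(calibration.intrinsicMatrix)")
        print("Extrinsic matrix: \(calibration.extrinsicMatrix)")
        print("Lens distortion center: \(calibration.lensDistortionCenter)")
    }
}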

Here is a minimal sketch of setting up such a pipeline. The class name below is illustrative, and preview wiring and error handling are omitted:
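
import AVFoundation

// TrueDepthCaptureController is an illustrative name, not part of any API.
final class TrueDepthCaptureController: NSObject, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // 1. Use the front TrueDepth camera as the input device.
        guard
            let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                 for: .video,
                                                 position: .front),
            let input = try? AVCaptureDeviceInput(device: device),
            session.canAddInput(input)
        else { return }
        session.addInput(input)

        // 2. Add the photo output, then enable depth delivery on it.
        //    The output must be attached to the session before the flag is set.
        guard session.canAddOutput(photoOutput) else { return }
        session.addOutput(photoOutput)
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

        session.commitConfiguration()
        // In production, call startRunning() off the main thread.
        session.startRunning()
    }

    func capturePhoto() {
        let settings = AVCapturePhotoSettings()
        // 3. Request depth data with the photo; this is what makes
        //    photo.depthData?.cameraCalibrationData non-nil in the callback.
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    // 4. Read the calibration data, as in the delegate sketch above.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let calibration = photo.depthData?.cameraCalibrationData else { return }
        print("Intrinsics: \(calibration.intrinsicMatrix)")
        print("Extrinsics: \(calibration.extrinsicMatrix)")
    }
}

Note that enabling isDepthDataDeliveryEnabled on the photo settings while it is disabled on the output raises a runtime exception, which is why the output's flag is set first during configuration.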



Source: https://stackoverflow.com/questions/62927167/swift-get-the-truthdepth-camera-parameters-for-face-tracking-in-arkit
