How to measure device distance from face with help of ARKit in iOS?

Submitted by 谁说胖子不能爱 on 2019-12-08 13:32:32

If you are running an ARFaceTrackingConfiguration (only for devices with a front-facing TrueDepth camera), there are at least two ways to achieve this (I think the second one is the better).

First method

You can use the depthData of the IR camera :

yourARSceneView.session.currentFrame?.capturedDepthData?.depthDataMap

This will return a CVPixelBuffer of size 640x360 containing depth data for each pixel (basically the distance between the IR camera and the real objects in the world). You can access CVPixelBuffer data through available extensions like this one. The depth data are expressed in meters. Once you have the depth data, you will have to choose or detect which ones are part of the user's face. You also have to be careful that "the depth-sensing camera provides data at a different frame rate than the color camera, so this property’s value can also be nil if no depth data was captured at the same time as the current color image". For more information, see AVDepthData.
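As a minimal sketch of the first method, here is how you could read the depth value at one pixel of the depth map. `sceneView` and the helper name `depthAtPixel` are my own for illustration, and I'm assuming the buffer uses the usual `kCVPixelFormatType_DepthFloat32` layout (one `Float32` per pixel):

```swift
import ARKit

// Sketch: read the depth (in meters) at pixel (x, y) of the current depth map.
// `sceneView` is assumed to be an ARSCNView running ARFaceTrackingConfiguration.
func depthAtPixel(x: Int, y: Int, in sceneView: ARSCNView) -> Float32? {
    // capturedDepthData can be nil: the TrueDepth camera delivers depth
    // at a lower frame rate than the color camera.
    guard let depthMap = sceneView.session.currentFrame?
        .capturedDepthData?.depthDataMap else { return nil }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)

    // Assuming one Float32 per pixel (kCVPixelFormatType_DepthFloat32).
    let rowStart = base + y * bytesPerRow
    return rowStart.assumingMemoryBound(to: Float32.self)[x]
}
```

You would still have to decide which of those 640x360 pixels actually belong to the face before averaging or picking a distance.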

Second method (recommended)

Another way to get the distance between the device and the user's face is to convert the position of the detected face into the camera's coordinate system. To do this, you will have to use the convertPosition method from SceneKit to convert from face coordinate space to camera coordinate space.

let positionInCameraSpace = theFaceNode.convertPosition(pointInFaceCoordinateSpace, to: yourARSceneView.pointOfView)

theFaceNode is the SCNNode created by ARKit representing the user's face. The pointOfView property of your ARSCNView returns the node from which the scene is viewed, basically the camera. pointInFaceCoordinateSpace could be any vertex of the face mesh or just the position of theFaceNode (which is the origin of the face coordinate system). Here, positionInCameraSpace is an SCNVector3 representing the position of the point you gave, in camera coordinate space. You can then get the distance between the point and the camera using the x, y, and z values of this SCNVector3 (expressed in meters).
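Putting the second method together, a small sketch might look like this. `faceNode` stands for the SCNNode ARKit gives you for the face anchor (e.g. in `renderer(_:didAdd:for:)`), and the function name is my own:

```swift
import ARKit
import SceneKit

// Sketch: distance (in meters) from the camera to the face-node origin.
// `faceNode` is the SCNNode ARKit created for the detected ARFaceAnchor;
// `sceneView` is your ARSCNView.
func distanceFromCamera(faceNode: SCNNode, in sceneView: ARSCNView) -> Float {
    // SCNVector3Zero is the origin of the face coordinate system, i.e. the
    // face node's own position; any face-mesh vertex would work as well.
    let p = faceNode.convertPosition(SCNVector3Zero, to: sceneView.pointOfView)
    // Euclidean distance from the camera (the pointOfView node) to that point.
    return (p.x * p.x + p.y * p.y + p.z * p.z).squareRoot()
}
```

The same call works for any vertex of `ARFaceGeometry` if you want the distance to a specific feature (nose tip, chin, etc.) rather than the face origin.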

I guess the second method is better as it looks more precise and you can choose precisely which point of the face you want to measure. You can also use transforms as Rom4in said (I guess the convertPosition method uses transforms). Hope it will help and I'm also curious to know if there are easier ways to achieve this.

Both the camera and the face have transforms, you can then calculate the distance between them.
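A sketch of that transform-based idea, assuming you have the current ARFrame and the ARFaceAnchor (the function name is mine): take the translation column of each 4x4 transform and measure the distance between the two points.

```swift
import ARKit
import simd

// Sketch: distance (in meters) between the camera and the face anchor,
// computed from the translation columns of their world transforms.
func distance(faceAnchor: ARFaceAnchor, frame: ARFrame) -> Float {
    let facePosition = simd_make_float3(faceAnchor.transform.columns.3)
    let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
    return simd_distance(facePosition, cameraPosition)
}
```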
