iOS11 ARKit: Can ARKit also capture the Texture of the user's face?

误落风尘 2020-12-08 12:45

I read the entire documentation for all the ARKit classes, top to bottom. I don't see any place that describes the ability to actually get the texture of the user's face.

ARFaceAn

3 Answers
  • 2020-12-08 13:00

    You want a texture-map-style image for the face? There’s no API that gets you exactly that, but all the information you need is there (see the short access sketch after this list):

    • ARFrame.capturedImage gets you the camera image.
    • ARFaceGeometry gets you a 3D mesh of the face.
    • ARAnchor and ARCamera together tell you where the face is in relation to the camera, and how the camera relates to the image pixels.
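
    For reference, a minimal sketch of grabbing all three from a running face-tracking session. This assumes session and faceAnchor come from your ARSessionDelegate / ARSCNViewDelegate callbacks; they are placeholders, not API from the question.

    import ARKit

    // Sketch: gather the raw ingredients inside a delegate callback.
    guard let frame = session.currentFrame else { return }
    let cameraImage = frame.capturedImage        // CVPixelBuffer with the camera image
    let faceGeometry = faceAnchor.geometry       // ARFaceGeometry: vertices, triangles, texcoords
    let facePose = faceAnchor.transform          // face position/orientation in world space
    let camera = frame.camera                    // ARCamera: projection, imageResolution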

    So it’s entirely possible to texture the face model using the current video frame image. For each vertex in the mesh...

    1. Convert the vertex position from model space to world space (multiply it by the anchor’s transform)
    2. Project that world-space point with the camera’s projection (for example, ARCamera.projectPoint) to get pixel coordinates in the captured image
    3. Divide by the image width/height to get normalized texture coordinates

    This gets you texture coordinates for each vertex, which you can then use to texture the mesh using the camera image. You could do this math either all at once to replace the texture coordinate buffer ARFaceGeometry provides, or do it in shader code on the GPU during rendering. (If you’re rendering using SceneKit / ARSCNView you can probably do this in a shader modifier for the geometry entry point.)
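
    If you take the shader modifier route, the snippet might look roughly like this. It is only a sketch: imageFromModel is an assumed uniform (projection * view * anchor transform, remapped into 0...1 texture space) that you would compute on the CPU each frame and hand to the material via key-value coding.

    // Sketch of a geometry-entry-point shader modifier (Metal syntax).
    // imageFromModel is a hypothetical per-frame uniform, set from Swift with
    // material.setValue(NSValue(scnMatrix4: matrix), forKey: "imageFromModel").
    let texturingModifier = """
    #pragma arguments
    float4x4 imageFromModel;

    #pragma body
    float4 projected = imageFromModel * float4(_geometry.position.xyz, 1.0);
    _geometry.texcoords[0] = projected.xy / projected.w;
    """
    faceNode.geometry?.shaderModifiers = [.geometry: texturingModifier]   // faceNode: your face mesh node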

    If instead you want to know for each pixel in the camera image what part of the face geometry it corresponds to, it’s a bit harder. You can’t just reverse the above math because you’re missing a depth value for each pixel... but if you don’t need to map every pixel, SceneKit hit testing is an easy way to get geometry for individual pixels.
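
    A sketch of that hit-testing approach (sceneView is assumed to be your ARSCNView, with the face mesh node in its scene):

    // Sketch: map a view point back onto the face mesh via SceneKit hit testing.
    func faceHit(at point: CGPoint, in sceneView: ARSCNView) -> SCNHitTestResult? {
        let hits = sceneView.hitTest(point, options: [.boundingBoxOnly: false])
        return hits.first   // .faceIndex is the mesh triangle, .localCoordinates the model-space point
    }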


    If what you’re actually asking for is landmark recognition — e.g. where in the camera image are the eyes, nose, beard, etc — there’s no API in ARKit for that. The Vision framework might help.
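
    For example, here is a rough sketch of running Vision’s face landmark detection on the frame’s captured image (error handling trimmed; the orientation hint may need adjusting for your setup):

    import Vision

    // Sketch: detect eyes, nose, lips, etc. in the camera image with Vision.
    func detectLandmarks(in frame: ARFrame) {
        let request = VNDetectFaceLandmarksRequest { request, _ in
            guard let faces = request.results as? [VNFaceObservation] else { return }
            for face in faces {
                // Landmark regions come back as normalized points.
                print(face.landmarks?.leftEye?.normalizedPoints ?? [],
                      face.landmarks?.nose?.normalizedPoints ?? [])
            }
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .leftMirrored,
                                            options: [:])
        try? handler.perform([request])
    }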

  • 2020-12-08 13:07

    No. That information is not currently available in ARKit.

    To detect other facial features, you'll need to run your own custom computer vision code. You can capture images from the front-facing camera using AVFoundation.
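
    A bare-bones sketch of such a capture pipeline (camera permission, error handling, and the canAdd checks are omitted; note that while an ARKit session is running it owns the camera, so there you would read ARFrame.capturedImage instead):

    import AVFoundation

    // Sketch: stream frames from the front camera for your own vision code.
    final class FrontCameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()

        func start() {
            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video, position: .front),
                  let input = try? AVCaptureDeviceInput(device: device) else { return }
            session.addInput(input)

            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
            session.addOutput(output)
            session.startRunning()
        }

        // Each camera frame lands here as a CMSampleBuffer.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Run your custom face-feature detection on the pixel buffer.
        }
    }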

  • 2020-12-08 13:16

    You can calculate the texture coordinates as follows:

    import ARKit

    // arFrame and faceAnchor are assumed to come from the running face-tracking session.
    let geometry = faceAnchor.geometry
    let vertices = geometry.vertices
    let camera = arFrame.camera
    let size = camera.imageResolution            // captured image size (landscape)
    let modelMatrix = faceAnchor.transform       // face model space -> world space

    let textureCoordinates = vertices.map { vertex -> vector_float2 in
        // 1. Model space -> world space.
        let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
        let worldVertex4 = simd_mul(modelMatrix, vertex4)
        let worldVector3 = simd_float3(x: worldVertex4.x, y: worldVertex4.y, z: worldVertex4.z)

        // 2. World space -> pixel coordinates in the captured image.
        //    The image is landscape, so width and height swap for a portrait viewport.
        let pt = camera.projectPoint(worldVector3,
            orientation: .portrait,
            viewportSize: CGSize(
                width: CGFloat(size.height),
                height: CGFloat(size.width)))

        // 3. Pixel coordinates -> normalized texture coordinates,
        //    flipped and rotated to match the portrait orientation.
        let v = 1.0 - Float(pt.x) / Float(size.height)
        let u = Float(pt.y) / Float(size.width)
        return vector_float2(u, v)
    }
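
    To actually use those coordinates you could rebuild the face mesh as an SCNGeometry with a new texture coordinate source and put the camera image on it. A sketch only: converting the captured CVPixelBuffer into something a material can display (e.g. a CGImage) is left out, and cameraImage below stands in for that result.

    import SceneKit

    // Sketch: rebuild the face geometry with the computed texture coordinates.
    let vertexSource = SCNGeometrySource(vertices: vertices.map {
        SCNVector3($0.x, $0.y, $0.z)
    })
    let texcoordSource = SCNGeometrySource(textureCoordinates: textureCoordinates.map {
        CGPoint(x: CGFloat($0.x), y: CGFloat($0.y))
    })
    let element = SCNGeometryElement(indices: geometry.triangleIndices,
                                     primitiveType: .triangles)

    let texturedFace = SCNGeometry(sources: [vertexSource, texcoordSource], elements: [element])
    texturedFace.firstMaterial?.diffuse.contents = cameraImage   // assumed CGImage/UIImage of the frame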
    