Projecting the ARKit face tracking 3D mesh to 2D image coordinates


Question


I am collecting face mesh 3D vertices using ARKit. I have read: Mapping image onto 3D face mesh and Tracking and Visualizing Faces.


I have the following struct:

struct CaptureData {
    var vertices: [SIMD3<Float>]
    var verticesformatted: String {
        let verticesDescribed = vertices.map { "\($0.x):\($0.y):\($0.z)" }.joined(separator: "~")
        return "<\(verticesDescribed)>"
    }
}

I have a Start button to begin capturing vertices:

@IBAction private func startPressed() {
    captureData = [] // Clear any previously captured data
    currentCaptureFrame = 0 // Reset the frame counter
    fpsTimer = Timer.scheduledTimer(withTimeInterval: 1 / fps, repeats: true) { [weak self] _ in
        self?.recordData()
    }
}

private var fpsTimer = Timer()
private var captureData: [CaptureData] = []
private var currentCaptureFrame = 0
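(`fps` is a constant I define elsewhere in the view controller; for the snippets here, assume something like:)

private let fps: Double = 30 // assumed capture rate in frames per second; the actual value may differ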

And a Stop button to stop capturing and save the data:

@IBAction private func stopPressed() {
    fpsTimer.invalidate() // Turn off the timer
    let capturedData = captureData.map { $0.verticesformatted }.joined(separator: "")
    let dir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).last!
    let url = dir.appendingPathComponent("facedata.txt")
    do {
        try capturedData.appendLineToURL(fileURL: url)
    } catch {
        print("Could not write to file")
    }
}
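(`appendLineToURL` is a small `String` extension I use for appending text to a file, adapted from a common Stack Overflow snippet; a minimal version, assuming UTF-8 text, looks roughly like this:)

extension String {
    // Append this string plus a newline to the file at fileURL.
    func appendLineToURL(fileURL: URL) throws {
        try (self + "\n").appendToURL(fileURL: fileURL)
    }

    // Append raw text, creating the file if it does not exist yet.
    func appendToURL(fileURL: URL) throws {
        let data = Data(self.utf8)
        if FileManager.default.fileExists(atPath: fileURL.path) {
            let handle = try FileHandle(forWritingTo: fileURL)
            defer { handle.closeFile() }
            handle.seekToEndOfFile()
            handle.write(data)
        } else {
            try data.write(to: fileURL, options: .atomic)
        }
    }
}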

The function for recording data:

private func recordData() {
    guard let data = getFrameData() else { return }
    captureData.append(data)
    currentCaptureFrame += 1
}

The function for getting frame data:

private func getFrameData() -> CaptureData? {
    // Avoid force-unwrapping: currentFrame can be nil, and anchors may be empty.
    guard let arFrame = sceneView?.session.currentFrame,
          let anchor = arFrame.anchors.first as? ARFaceAnchor else { return nil }
    return CaptureData(vertices: anchor.geometry.vertices)
}

The ARSCNViewDelegate extension:

extension ViewController: ARSCNViewDelegate {
    
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        currentFaceAnchor = faceAnchor
        if node.childNodes.isEmpty, let contentNode = selectedContentController.renderer(renderer, nodeFor: faceAnchor) {
            node.addChildNode(contentNode)
        }
        selectedContentController.session = sceneView?.session
        selectedContentController.sceneView = sceneView
    }
    
    /// - Tag: ARFaceGeometryUpdate
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard anchor == currentFaceAnchor,
            let contentNode = selectedContentController.contentNode,
            contentNode.parent == node
            else { return }
        selectedContentController.session = sceneView?.session
        selectedContentController.sceneView = sceneView
        selectedContentController.renderer(renderer, didUpdate: contentNode, for: anchor)
    }
}

I am trying to use the example code from Tracking and Visualizing Faces:

// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
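(For context, `displayTransform` in that snippet is the matrix form of the CGAffineTransform returned by `ARFrame.displayTransform(for:viewportSize:)`; on the CPU side it can be applied directly to a normalized image-space point. A sketch, where `frame`, `viewportSize`, and `imagePoint` are placeholders:)

// `frame` is the current ARFrame, `viewportSize` is the size of the view
// showing the camera image, and `imagePoint` is a normalized ([0,1]x[0,1])
// image-space coordinate.
let display = frame.displayTransform(for: .portrait, viewportSize: viewportSize)
let viewPoint = imagePoint.applying(display) // normalized view coordinate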

My question is: how can I use the example code above to transform the collected 3D face mesh vertices to 2D image coordinates?

I would like to get the 3D mesh vertices together with their corresponding 2D coordinates.
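From reading the documentation, I believe a CPU-side equivalent of the shader's transform chain would look something like the sketch below. It is untested; it assumes portrait orientation, uses the camera's image resolution as the viewport size, and relies on `ARCamera.projectPoint(_:orientation:viewportSize:)` to apply the view/projection transforms and the perspective divide:

private func projectVertices(of anchor: ARFaceAnchor, in frame: ARFrame) -> [CGPoint] {
    // Viewport assumed to match the captured camera image; pass the view's
    // bounds instead if view coordinates are wanted.
    let viewportSize = frame.camera.imageResolution
    let modelMatrix = anchor.transform // face-anchor space -> world space
    return anchor.geometry.vertices.map { vertex in
        // Promote to homogeneous coordinates and move into world space.
        let world = modelMatrix * simd_float4(vertex.x, vertex.y, vertex.z, 1)
        // projectPoint applies the view and projection transforms plus the
        // perspective divide, returning 2D viewport coordinates.
        return frame.camera.projectPoint(simd_float3(world.x, world.y, world.z),
                                         orientation: .portrait,
                                         viewportSize: viewportSize)
    }
}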

Currently, I can capture the face mesh points like so: <mesh_x:mesh_y:mesh_z:...>

I would like to convert my mesh points to image coordinates and show them together, like so:

Expected result: <mesh_x:mesh_y:mesh_z:img_x:img_y...>
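(For illustration only, a hypothetical variant of my CaptureData struct that would produce that output, assuming the projected points are already available:)

// Hypothetical: pairs each 3D vertex with its projected 2D point.
// The two arrays are expected to have the same count and ordering.
struct CaptureDataWithProjection {
    var vertices: [SIMD3<Float>]
    var points: [CGPoint]
    var formatted: String {
        let described = zip(vertices, points)
            .map { "\($0.x):\($0.y):\($0.z):\($1.x):\($1.y)" }
            .joined(separator: "~")
        return "<\(described)>"
    }
}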

Any suggestions? Thanks in advance!

Source: https://stackoverflow.com/questions/62783206/projecting-the-arkit-face-tracking-3d-mesh-to-2d-image-coordinates
