Augmented Faces API – How are facial landmarks generated?
Question: I'm an IT student and would like to understand more about the Augmented Faces API in ARCore. I just saw the ARCore v1.7 release and the new Augmented Faces API. I see the enormous potential of this API, but I haven't found any questions or articles on the subject. So I've been asking myself some questions, and here are a few assumptions that come to mind about this release.

Assumption: The ARCore team is using machine learning (like Instagram and Snapchat) to generate landmarks all over