I'm trying to read and make sense of Google ARCore's domain model, particularly the Android SDK packages. Currently this SDK is in "preview" mode and so there a…
A Pose is a structured transformation. It is a fixed numerical transformation from one coordinate system (typically object-local) to another (typically world).
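To make that concrete, here is a minimal sketch (in Java, assuming Pose's transformPoint() and compose() methods) of using a Pose as a fixed object-local-to-world transform:

```java
import com.google.ar.core.Pose;

// Sketch only: worldFromObject is assumed to be a Pose that maps object-local
// coordinates into world coordinates (obtained from the SDK elsewhere).
final class PoseSketch {
    // The world-space position of a point given in the object's local space.
    static float[] localToWorld(Pose worldFromObject, float[] localPoint) {
        return worldFromObject.transformPoint(localPoint);
    }

    // Poses compose like transforms: worldFromPart = worldFromObject * objectFromPart.
    static Pose worldFromPart(Pose worldFromObject, Pose objectFromPart) {
        return worldFromObject.compose(objectFromPart);
    }
}
```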
An Anchor represents a physically fixed location in the world. Its getPose() will update as the understanding of the world changes. For example, imagine you have a building with a hallway around the outside. If you walk all the way around that hallway, sensor drift means you don't wind up at the same coordinates you started at. However, ARCore can detect (using visual features) that it is in the same space it started in. When this happens, it distorts the world so that your current location and your original location line up. As part of this distortion, the locations of anchors are adjusted as well so that they stay in the same physical place.
Because of this distortion, a Pose relative to the world should be considered valid only for the duration of the frame during which it was returned. As soon as you call update() the next time, the world may have reshaped and that pose could be useless. If you need to keep a location longer than a frame, create an Anchor. Just make sure to removeAnchors() any anchors that you're no longer using, as there is an ongoing cost for each live anchor.
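Putting that together, a rough sketch of the anchor lifecycle might look like this. It assumes a preview-era Session.addAnchor(Pose) call alongside the removeAnchors() mentioned above; the names and exception handling are illustrative, not exact:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;

import java.util.ArrayList;
import java.util.List;

final class AnchorSketch {
    private final List<Anchor> anchors = new ArrayList<>();

    // Pin a physical location: the pose is only trustworthy during the frame that
    // produced it, so convert it to an Anchor right away.
    void pin(Session session, Pose poseFromThisFrame) {
        try {
            anchors.add(session.addAnchor(poseFromThisFrame));  // addAnchor() assumed (preview API)
        } catch (Exception e) {
            // The SDK may refuse while tracking is not yet established; skip in that case.
        }
    }

    // Each frame, re-read anchor poses; they may have shifted in world coordinates
    // after ARCore re-aligns the world (e.g. on loop closure).
    void onFrame(Frame frame) {
        for (Anchor anchor : anchors) {
            Pose current = anchor.getPose();
            // ... place virtual content at `current` for this frame only ...
        }
    }

    // Release anchors you no longer need; each live anchor has an ongoing tracking cost.
    void release(Session session) {
        session.removeAnchors(anchors);
        anchors.clear();
    }
}
```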
A Frame captures the current state at an instant, along with the changes in state between the two most recent calls to update().
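In practice that usually means one update() call per rendered frame, with everything you derived from the previous Frame recomputed. A rough sketch (exception handling is an assumption, since some SDK versions let update() throw):

```java
import com.google.ar.core.Frame;
import com.google.ar.core.Session;

final class FrameLoopSketch {
    // Called once per rendered frame (e.g. from a GL renderer's onDrawFrame).
    void onDrawFrame(Session session) {
        try {
            Frame frame = session.update();  // snapshot of the current state
            // ... read poses, hit tests, the point cloud, etc. from `frame` here;
            // none of it should be reused after the next update() ...
        } catch (Exception e) {
            // update() may throw depending on SDK version (assumed); skip this frame.
        }
    }
}
```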
PointClouds are sets of 3D visual feature points detected in the world. They are in their own local coordinate system, whose pose can be accessed from Frame.getPointCloudPose(). Developers who want better spatial understanding than plane detection provides can use the point clouds to learn more about the structure of the 3D world.
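For example, something along these lines maps feature points into world coordinates via Frame.getPointCloudPose(). This is hedged: the getPointCloud()/getPoints() accessors and the x, y, z, confidence packing are my assumptions about the preview API, not confirmed above:

```java
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;

import java.nio.FloatBuffer;

final class PointCloudSketch {
    // Convert this frame's feature points from the point cloud's local coordinate
    // system into world coordinates.
    static void readWorldPoints(Frame frame) {
        FloatBuffer points = frame.getPointCloud().getPoints();  // assumed accessor
        Pose cloudToWorld = frame.getPointCloudPose();

        while (points.remaining() >= 4) {  // assumed packing: x, y, z, confidence
            float[] local = { points.get(), points.get(), points.get() };
            float confidence = points.get();
            float[] world = cloudToWorld.transformPoint(local);
            // ... use `world` (and `confidence`) to reason about scene structure
            // beyond what plane detection gives you ...
        }
    }
}
```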
Does that help?