Does anyone know the difference between feature detection and descriptor extraction in OpenCV 2.3?
I understand that the latter is required for matching with a DescriptorMatcher, but what is feature detection itself needed for then?
Both Feature Detection and Feature Descriptor Extraction are parts of feature-based image registration. It only makes sense to look at them in the context of the whole feature-based image registration process to understand what their job is.
Feature-based registration algorithm
The following picture from the PCL documentation shows such a registration pipeline:
Data acquisition: An input image and a reference image are fed into the algorithm. The images should show the same scene from slightly different viewpoints.
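For illustration, loading the two images could look like this with the OpenCV 2.3 C++ API (a minimal sketch; the file names are placeholders):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load the input image and the reference image (paths are placeholders).
    cv::Mat input     = cv::imread("input.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat reference = cv::imread("reference.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (input.empty() || reference.empty())
        return -1; // one of the images could not be loaded

    // ... the remaining registration steps go here ...
    return 0;
}
```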
Keypoint estimation (Feature detection): A keypoint (interest point) is a point within the point cloud (or image) with some special characteristics: it is sparse in comparison to the total number of points, it is distinctive in its local neighborhood, and it can be detected repeatably. OpenCV offers a number of algorithms for Feature detection, such as SIFT, SURF, FAST, ORB and MSER.
Such salient points in an image are useful because, taken together, they characterize the image and help make different parts of it distinguishable.
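A minimal sketch of this detection step, assuming the OpenCV 2.3 C++ API (the helper name detectKeypoints is made up for this example; any detector name supported by FeatureDetector::create works):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Detect keypoints in one image. "SURF" is just one choice here;
// "SIFT", "FAST", "ORB" or "MSER" can be selected by name as well.
std::vector<cv::KeyPoint> detectKeypoints(const cv::Mat& image)
{
    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SURF");
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(image, keypoints);
    return keypoints;
}
```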
Feature descriptors (Descriptor extractor): After detecting keypoints we go on to compute a descriptor for every one of them. "A local descriptor is a compact representation of a point’s local neighborhood. In contrast to global descriptors describing a complete object or point cloud, local descriptors try to resemble shape and appearance only in a local neighborhood around a point and thus are very suitable for representing it in terms of matching." (Dirk Holz et al.). OpenCV options include, for example, the SIFT, SURF, ORB and BRIEF descriptor extractors.
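A corresponding sketch for the descriptor-extraction step (again OpenCV 2.3 C++ API; computeDescriptors is a made-up helper name):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Compute one descriptor (one matrix row) per detected keypoint.
// The extractor should match the detector, e.g. SURF keypoints -> SURF descriptors.
cv::Mat computeDescriptors(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints)
{
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SURF");
    cv::Mat descriptors;
    extractor->compute(image, keypoints, descriptors);
    return descriptors;
}
```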
Correspondence Estimation (descriptor matcher): The next task is to find correspondences between the keypoints found in both images. Therefore the extracted feature descriptors are placed in a structure that can be searched efficiently (such as a kd-tree). Usually it is sufficient to look up all local feature descriptors and match each of them to its counterpart in the other image. However, because two images of a similar scene don't necessarily yield the same number of feature descriptors (one image can contain more data than the other), we need to run a separate correspondence rejection process. OpenCV options include, for example, BruteForceMatcher and FlannBasedMatcher.
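A sketch of the matching step; FlannBasedMatcher builds such an efficiently searchable index, while BruteForceMatcher would be the exhaustive alternative (matchDescriptors is a made-up helper name):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Match every descriptor of the input image to its nearest counterpart
// in the reference image.
std::vector<cv::DMatch> matchDescriptors(const cv::Mat& descriptorsInput,
                                         const cv::Mat& descriptorsReference)
{
    cv::FlannBasedMatcher matcher;    // FLANN-based (approximate nearest neighbour) search
    std::vector<cv::DMatch> matches;  // each DMatch holds queryIdx, trainIdx and distance
    matcher.match(descriptorsInput, descriptorsReference, matches);
    return matches;
}
```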
Correspondence rejection: One of the most common approaches to perform correspondence rejection is to use RANSAC (Random Sample Consensus).
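In OpenCV this rejection step can be done together with the transformation estimation, e.g. by calling cv::findHomography with the CV_RANSAC flag, which also returns an inlier mask marking the rejected correspondences (a sketch; rejectOutliersRansac is a made-up helper name):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Estimate a homography with RANSAC; correspondences whose reprojection
// error exceeds 3 pixels are rejected (their mask entry is set to 0).
cv::Mat rejectOutliersRansac(const std::vector<cv::KeyPoint>& keypointsInput,
                             const std::vector<cv::KeyPoint>& keypointsReference,
                             const std::vector<cv::DMatch>& matches,
                             std::vector<unsigned char>& inlierMask)
{
    std::vector<cv::Point2f> pointsInput, pointsReference;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        pointsInput.push_back(keypointsInput[matches[i].queryIdx].pt);
        pointsReference.push_back(keypointsReference[matches[i].trainIdx].pt);
    }
    return cv::findHomography(pointsInput, pointsReference, CV_RANSAC, 3.0, inlierMask);
}
```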
Transformation Estimation: After robust correspondences between the two images have been computed, an Absolute Orientation Algorithm is used to calculate a transformation matrix, which is applied to the input image to align it with the reference image. There are many different algorithmic approaches to this; a common one is based on Singular Value Decomposition (SVD).
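For the 2D image case, the homography returned by cv::findHomography in the previous sketch plays the role of this transformation matrix; applying it to the input image is then a single call (warpToReference is a made-up helper name):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Warp the input image with the estimated 3x3 transformation so that it
// lines up with the reference image.
cv::Mat warpToReference(const cv::Mat& input, const cv::Mat& reference, const cv::Mat& transform)
{
    cv::Mat registered;
    cv::warpPerspective(input, registered, transform, reference.size());
    return registered;
}
```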