What is the difference between feature detection and descriptor extraction?

Asked by 执念已碎 on 2020-12-12 11:30

Does anyone know the difference between feature detection and descriptor extraction in OpenCV 2.3?

I understand that the latter is required for matching using Descr

2 Answers
  •  时光取名叫无心 · answered 2020-12-12 12:03

    Feature detection and feature descriptor extraction are both parts of feature-based image registration. Their jobs only make sense in the context of the whole feature-based registration process.

    Feature-based registration algorithm

    The following picture from the PCL documentation shows such a registration pipeline:

    1. Data acquisition: An input image and a reference image are fed into the algorithm. The images should show the same scene from slightly different viewpoints.

    2. Keypoint estimation (Feature detection): A keypoint (interest point) is a point within the image (the PCL documentation speaks of point clouds) that has the following characteristics:

      1. it has a clear, preferably mathematically well-founded, definition,
      2. it has a well-defined position in image space,
      3. the local image structure around the interest point is rich in terms of local information content.

      OpenCV comes with several implementations for feature detection, such as:
        • AKAZE
        • BRISK
        • ORB
        • KAZE
        • SURF
        • SIFT
        • MSER
        • FAST
        • STAR
        • MSD

      Such salient points are useful because together they characterize the image and help make different parts of it distinguishable.

    3. Feature descriptors (Descriptor extractor): After detecting keypoints, we go on to compute a descriptor for each of them. "A local descriptor [is] a compact representation of a point’s local neighborhood. In contrast to global descriptors describing a complete object or point cloud, local descriptors try to resemble shape and appearance only in a local neighborhood around a point and thus are very suitable for representing it in terms of matching." (Dirk Holz et al.). OpenCV options:

      • AKAZE
      • BRISK
      • ORB
      • KAZE
      • SURF
      • SIFT
      • FREAK
      • DAISY
      • LATCH
      • LUCID
      • BRIEF
    4. Correspondence Estimation (descriptor matcher): The next task is to find correspondences between the keypoints found in both images. To do this, the extracted features are placed in a structure that can be searched efficiently (such as a kd-tree). Usually it is sufficient to look up all local feature descriptors and match each one to its corresponding counterpart in the other image. However, because two images of a similar scene don't necessarily have the same number of feature descriptors (one can have more data than the other), we need to run a separate correspondence rejection process. OpenCV options:

      • BF
      • FLANN
    5. Correspondence rejection: One of the most common approaches to perform correspondence rejection is to use RANSAC (Random Sample Consensus).

    6. Transformation Estimation: After robust correspondences between the two images have been computed, an absolute orientation algorithm is used to calculate a transformation matrix, which is applied to the input image to align it with the reference image. There are many algorithmic approaches to this; a common one uses the Singular Value Decomposition (SVD).
