feature-descriptor

OpenCV - RobustMatcher using findHomography

瘦欲@ Submitted on 2020-01-01 07:23:19
Question: I've implemented a robust matcher found on the internet, based on different tests: a symmetry test, a ratio test, and a RANSAC test. It works well. I then used findHomography in order to keep only good matches. Here is the code: RobustMatcher::RobustMatcher() : ratio(0.65f), refineF(true), confidence(0.99), distance(3.0) { detector = new cv::SurfFeatureDetector(400); //Better than ORB //detector = new cv::SiftFeatureDetector; //Better than ORB //extractor= new cv::OrbDescriptorExtractor(); //extractor= new

KeyPoint descriptor OpenCV

帅比萌擦擦* Submitted on 2020-01-01 03:41:34
Question: I am trying to understand how to get the descriptor for a given KeyPoint in OpenCV. So far my code looks as follows: #include <iostream> #include "opencv2/opencv.hpp" typedef cv::Mat Image; int main(int argc, const char * argv[]) { Image imgA = cv::imread("images/buddhamulticam_total100.png", CV_LOAD_IMAGE_GRAYSCALE); Image imgB = cv::imread("images/buddhamulticam_total101.png", CV_LOAD_IMAGE_GRAYSCALE); cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("ORB"); cv::Ptr<cv:

How to create a database of SIFT descriptors

别说谁变了你拦得住时间么 Submitted on 2019-12-25 04:33:06
Question: How do I create a database of SIFT descriptors (of images)? My intention is to build a supervised training set for a Support Vector Machine. Answer 1: Which kind of images do you need? If you don't care, you can just download a public computer vision dataset like http://lear.inrialpes.fr/~jegou/data.php#holidays which offers both images and already-computed SIFTs from its regions. Or try other datasets, for instance, from http://www.cvpapers.com/datasets.html Another possibility is just to

Calculate distance between two descriptors

主宰稳场 Submitted on 2019-12-22 01:20:47
Question: I'm trying to calculate the distance (Euclidean or Hamming) between two descriptors that have already been computed. The problem is that I don't want to use a matcher; I just want to calculate the distance between two descriptors. I'm using OpenCV 2.4.9 and I have my descriptors stored in Mat type: Mat descriptors1; Mat descriptors2; and now I just want to calculate the distance (preferably the Hamming distance, since I'm using binary descriptors) between row1 of descriptors1 and row1 of descriptors2 (for

OpenCV - Use FLANN with ORB descriptors to match features

扶醉桌前 Submitted on 2019-12-12 11:28:27
Question: I am using OpenCV 3.2. I am trying to use FLANN to match feature descriptors faster than brute force. // Ratio to the second neighbor to consider a good match. #define RATIO 0.75 void matchFeatures(const cv::Mat &query, const cv::Mat &target, std::vector<cv::DMatch> &goodMatches) { std::vector<std::vector<cv::DMatch>> matches; cv::Ptr<cv::FlannBasedMatcher> matcher = cv::FlannBasedMatcher::create(); // Find 2 best matches for each descriptor to make later the second neighbor test.

How to create a descriptor matrix in OpenCV

眉间皱痕 Submitted on 2019-12-12 01:49:43
Question: How do I create a descriptor in OpenCV that can be used with one of the DescriptorMatchers in OpenCV, in the following manner: cv::BFMatcher matcher( cv::NORM_L2, false ); std::vector< cv::DMatch > matches; matcher.match( descriptors_1, descriptors_2, matches ); I already have the following descriptor class. How do I convert it, or create a new matrix, that can be used with a DescriptorMatcher, preferably BFMatcher? class Descriptor { public: float lxi, lyi; // Location of descriptor vector<double

Re-using descriptors with BOWImgDescriptorExtractor

て烟熏妆下的殇ゞ Submitted on 2019-12-11 12:23:22
Question: I have the following code, which is intended to cluster a set of images via their SIFT feature descriptors. cv::BOWKMeansTrainer trainer = cv::BOWKMeansTrainer(n_clusters); for (Image* image : get_images()) { trainer.add(image->get_descriptors()); } cv::Mat vocabulary = trainer.cluster(); cv::BOWImgDescriptorExtractor extractor(Image::get_extractor(), Image::get_matcher()); extractor.setVocabulary(vocabulary); for (Image* image : get_images()) { cv::Mat bow_descriptor; extractor.compute(image-

Efficient way for SIFT descriptor matching

陌路散爱 Submitted on 2019-12-09 12:18:47
Question: There are 2 images, A and B. I extract the keypoints (a[i] and b[i]) from them. I wonder how I can determine the matching between a[i] and b[j] efficiently. The obvious method is to compare each point in A with each point in B, but that is too time-consuming for large image databases. How can I compare point a[i] with just b[k], where k ranges over a small set? I heard that a kd-tree may be a good choice. Are there any good examples of kd-trees? Any other suggestions? Answer 1: KD

At what stage does the training take place in FlannBasedMatcher in OpenCV?

会有一股神秘感。 Submitted on 2019-12-08 06:56:37
Question: The following code is in C++, and I am using OpenCV for my experiment. Suppose I am using a kd-tree (FlannBasedMatcher) in the following way: //these are inputs to the code snippet below. //They are filled with suitable values Mat& queryDescriptors; vector<Training> &trainCollection; vector< vector<DMatch> >& matches; int knn; //setting flann parameters const Ptr<flann::IndexParams>& indexParams=new flann::KDTreeIndexParams(4); const Ptr<flann::SearchParams>& searchParams=new flann::SearchParams

How to use Mikolajczyk's evaluation framework for feature detectors/descriptors?

十年热恋 Submitted on 2019-12-06 03:16:02
Question: I'm trying to assess the correctness of my SURF descriptor implementation with the de facto standard framework by Mikolajczyk et al. I'm using OpenCV to detect and describe SURF features, and I use the same feature positions as input to my descriptor implementation. To evaluate descriptor performance, the framework requires evaluating detector repeatability first. Unfortunately, the repeatability test expects a list of feature positions along with ellipse parameters defining the size and