feature-descriptor

Calculate distance between two descriptors

Submitted by 你离开我真会死。 on 2019-12-04 21:36:17
I'm trying to calculate the distance (Euclidean or Hamming) between two descriptors that have already been computed. The problem is that I don't want to use a matcher; I just want to calculate the distance between two descriptors. I'm using OpenCV 2.4.9 and I have my descriptors stored in Mat objects: Mat descriptors1; Mat descriptors2; and now I just want to calculate the distance (preferably the Hamming distance, since I'm using binary descriptors) between row 1 of descriptors1 and row 1 of descriptors2 (for example). I have tried to use the bitwise_xor() function, but then I did not have an efficient way of doing the …
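A minimal sketch of one way to do this without a matcher, using cv::norm (which accepts NORM_HAMMING for byte descriptors); the helper function name and row indices are illustrative:

```cpp
#include <opencv2/core/core.hpp>

// descriptors1 and descriptors2 are CV_8U Mats from a binary extractor
// (ORB, BRIEF, BRISK, ...), one descriptor per row.
double hammingDistance(const cv::Mat& descriptors1,
                       const cv::Mat& descriptors2,
                       int row1, int row2)
{
    // cv::norm with NORM_HAMMING counts the differing bits between the two
    // byte rows, so no explicit bitwise_xor()/popcount loop is needed.
    return cv::norm(descriptors1.row(row1), descriptors2.row(row2),
                    cv::NORM_HAMMING);
}

// For float descriptors (SIFT/SURF) the Euclidean distance is the same call
// with cv::NORM_L2 instead of cv::NORM_HAMMING.
```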

How to use Mikolajczyk's evaluation framework for feature detectors/descriptors?

Submitted by 时光怂恿深爱的人放手 on 2019-12-04 07:29:56
I'm trying to assess the correctness of my SURF descriptor implementation with the de facto standard framework by Mikolajczyk et al. I'm using OpenCV to detect and describe SURF features, and I use the same feature positions as input to my descriptor implementation. To evaluate descriptor performance, the framework requires evaluating detector repeatability first. Unfortunately, the repeatability test expects a list of feature positions along with ellipse parameters defining the size and orientation of an image region around each feature. However, OpenCV's SURF detector only provides feature …
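A minimal sketch of one way to bridge the two representations, assuming the region files use one "x y a b c" line per feature (with a·x² + 2b·xy + c·y² = 1 describing the ellipse) and that a circular region of radius KeyPoint::size/2 is an acceptable approximation; the file header layout and the radius scaling are assumptions you may need to adjust for your copy of the framework:

```cpp
#include <cstdio>
#include <vector>
#include <opencv2/features2d/features2d.hpp>

// Writes keypoints as circles in the "x y a b c" ellipse format
// (a*x^2 + 2*b*x*y + c*y^2 = 1). For a circle of radius r:
// a = c = 1/r^2 and b = 0.
void writeRegionFile(const char* path, const std::vector<cv::KeyPoint>& kps)
{
    FILE* f = std::fopen(path, "w");
    if (!f) return;
    // Assumed header: descriptor dimension (1.0 when only regions are
    // stored) followed by the number of regions.
    std::fprintf(f, "1.0\n%lu\n", static_cast<unsigned long>(kps.size()));
    for (size_t i = 0; i < kps.size(); ++i)
    {
        double r = kps[i].size / 2.0;   // KeyPoint::size is a diameter
        double a = 1.0 / (r * r);
        std::fprintf(f, "%f %f %f %f %f\n",
                     kps[i].pt.x, kps[i].pt.y, a, 0.0, a);
    }
    std::fclose(f);
}
```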

OpenCV - RobustMatcher using findHomography

Submitted by 好久不见. on 2019-12-03 21:54:41
I've implemented a robust matcher found on the internet, based on different tests: a symmetry test, a ratio test and a RANSAC test. It works well. I then used findHomography in order to keep only good matches. Here is the code: RobustMatcher::RobustMatcher() : ratio(0.65f), refineF(true), confidence(0.99), distance(3.0) { detector = new cv::SurfFeatureDetector(400); //Better than ORB //detector = new cv::SiftFeatureDetector; //Better than ORB //extractor= new cv::OrbDescriptorExtractor(); //extractor= new cv::SiftDescriptorExtractor; extractor= new cv::SurfDescriptorExtractor; // matcher= new cv: …
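The excerpt cuts off before the findHomography step; a minimal, self-contained sketch of the usual RANSAC filtering (the function name is hypothetical, and the 3.0 reprojection threshold simply mirrors the distance member above):

```cpp
#include <vector>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>

// Keeps only the matches consistent with a RANSAC-estimated homography.
// 'matches' are the survivors of the ratio and symmetry tests.
std::vector<cv::DMatch> filterWithHomography(
        const std::vector<cv::KeyPoint>& keypoints1,
        const std::vector<cv::KeyPoint>& keypoints2,
        const std::vector<cv::DMatch>& matches,
        double reprojThreshold = 3.0)
{
    std::vector<cv::DMatch> inliers;
    if (matches.size() < 4)          // findHomography needs >= 4 point pairs
        return inliers;

    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        pts1.push_back(keypoints1[matches[i].queryIdx].pt);
        pts2.push_back(keypoints2[matches[i].trainIdx].pt);
    }

    std::vector<uchar> inlierMask(matches.size(), 0);
    cv::findHomography(pts1, pts2, CV_RANSAC, reprojThreshold, inlierMask);

    for (size_t i = 0; i < matches.size(); ++i)
        if (inlierMask[i])
            inliers.push_back(matches[i]);
    return inliers;
}
```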

KeyPoint descriptor OpenCV

Submitted by 跟風遠走 on 2019-12-03 08:58:32
I am trying to understand how to get the descriptor for a given KeyPoint in OpenCV. So far my code looks as follows: #include <iostream> #include "opencv2/opencv.hpp" typedef cv::Mat Image; int main(int argc, const char * argv[]) { Image imgA = cv::imread("images/buddhamulticam_total100.png", CV_LOAD_IMAGE_GRAYSCALE); Image imgB = cv::imread("images/buddhamulticam_total101.png", CV_LOAD_IMAGE_GRAYSCALE); cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("ORB"); cv::Ptr<cv::DescriptorExtractor> descriptor = cv::DescriptorExtractor::create("ORB"); std::vector<cv::KeyPoint> …
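A minimal sketch of how the snippet typically continues: detect(), then compute(), after which row i of the descriptor Mat corresponds to the i-th surviving keypoint (variable names continue the excerpt above and are otherwise assumed):

```cpp
std::vector<cv::KeyPoint> keypointsA, keypointsB;
detector->detect(imgA, keypointsA);
detector->detect(imgB, keypointsB);

cv::Mat descriptorsA, descriptorsB;
descriptor->compute(imgA, keypointsA, descriptorsA);
descriptor->compute(imgB, keypointsB, descriptorsB);

// compute() may drop keypoints it cannot describe, so index into the
// (possibly shrunken) keypoint vector afterwards: descriptorsA.row(i)
// belongs to keypointsA[i]. For ORB each row is 32 bytes (CV_8U).
cv::Mat firstDescriptor = descriptorsA.row(0);
```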

OpenCV FREAK: Fast Retina KeyPoint descriptor

Submitted by a 夏天 on 2019-12-03 05:06:21
I am developing an application which involves the use of FREAK descriptors, just released in OpenCV 2.4.2. In the documentation only two functions appear: the class constructor and a confusing selectPairs() method. I want to use my own detector and then call the FREAK descriptor, passing the detected keypoints, but I don't clearly understand how the class works. Question: do I strictly need to use selectPairs()? Is it enough to just call FREAK.compute()? I don't really understand …
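A minimal sketch of the compute-only usage; the image path and the FAST detector are placeholders for "my own detector", and it is assumed that the default FREAK constructor ships with its pre-trained sampling-pair pattern, so selectPairs() is only needed when training a custom pattern:

```cpp
#include <vector>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("image.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Any detector works here; FAST stands in for "my own detector".
    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(img, keypoints, 30);

    // Default-constructed FREAK uses its built-in pair pattern,
    // so no call to selectPairs() is required.
    cv::FREAK freak;
    cv::Mat descriptors;
    freak.compute(img, keypoints, descriptors);  // one 64-byte row per keypoint

    return 0;
}
```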

OpenCV ORB descriptor - how exactly is it stored in a set of bytes?

Submitted by 有些话、适合烂在心里 on 2019-12-02 20:34:17
I'm currently using OpenCV's ORB feature extractor, and I noticed the strange (at least to me) way the ORB descriptor is stored (it is basically BRIEF-32 with a modification that is not relevant to my question). As some of you know, ORB takes the keypoints extracted using a modified FAST-9 (circle radius = 9 pixels; it also stores the orientation of the keypoint) and uses them with a modified BRIEF-32 descriptor to store the feature that the keypoint represents. BRIEF (the ORB version) works as follows: we take a 31x31 pixel patch (representing a feature) and create a bunch of random 5x5 pixel test …
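The excerpt stops before the packing details; the layout is 256 binary test results packed eight per byte, giving the 32-byte CV_8U rows. A minimal sketch of reading an individual test bit back out (the least-significant-bit-first order is an assumption, since the exact bit order inside each byte is an implementation detail):

```cpp
#include <opencv2/core/core.hpp>

// Returns the k-th binary test result (0 or 1) of the ORB/BRIEF descriptor
// stored in the given row. k ranges over 0..255 for a 32-byte descriptor.
int descriptorBit(const cv::Mat& descriptors, int row, int k)
{
    uchar byteValue = descriptors.at<uchar>(row, k / 8);
    return (byteValue >> (k % 8)) & 1;
}
```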

Encoding CV_32FC1 Mat data with base64

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-02 06:01:56
Hello, I am trying to extract the data from a SURF descriptor; when I try this with an ORB descriptor it works. When I use the SURF one, the program quits with segmentation fault 11 on the base64-encode line. I use the base64 function from this site: Encoding and decoding base64. The exact problem is that the format of the ORB descriptor is CV_8UC1 while the SURF descriptor is CV_32FC1, so I must base64-encode 32-bit floats instead of 8-bit unsigned chars. How can I do this? Mat desc; vector …
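A minimal sketch, assuming a base64_encode(const unsigned char*, unsigned int) signature like the one on the linked page: the key point is to compute the buffer length in bytes with total() * elemSize() (4 bytes per element for CV_32FC1, 1 for CV_8UC1) rather than rows * cols:

```cpp
#include <string>
#include <opencv2/core/core.hpp>

// Assumed to be the encoder from the "Encoding and decoding base64" page.
std::string base64_encode(const unsigned char* data, unsigned int len);

std::string encodeDescriptors(const cv::Mat& desc)
{
    // Take the data pointer only from a contiguous block.
    cv::Mat m = desc.isContinuous() ? desc : desc.clone();

    // elemSize() accounts for the type: 4 bytes per CV_32FC1 element,
    // 1 byte per CV_8UC1 element. Using rows*cols as the byte count for a
    // float Mat either truncates the buffer or leads to out-of-bounds reads.
    size_t numBytes = m.total() * m.elemSize();
    return base64_encode(m.data, static_cast<unsigned int>(numBytes));
}
```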

How does the SiftDescriptorExtractor from OpenCV convert descriptor values?

Submitted by 时光怂恿深爱的人放手 on 2019-11-30 17:18:58
I have a question about the last part of the SiftDescriptorExtractor's job. I'm doing the following: SiftDescriptorExtractor extractor; Mat descriptors_object; extractor.compute( img_object, keypoints_object, descriptors_object ); Now I want to check the elements of the descriptors_object Mat: std::cout << descriptors_object.row(1) << std::endl; The output looks like: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 32, 15, 0, 0, 0, 0, 0, 0, 73, 33, 11, 0, 0, 0, 0, 0, 0, 5, …
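For context, OpenCV's SIFT normalizes each 128-D vector, multiplies it by 512 and saturates to the 0..255 range before storing it as floats (CV_32F), which is why the printed values are small integers. A minimal sketch for getting back approximately unit-normalized values (the 1/512 factor assumes that implementation detail):

```cpp
#include <opencv2/core/core.hpp>

// Undo the 512x integer scaling to recover roughly unit-normalized
// descriptor values.
cv::Mat toUnitScale(const cv::Mat& siftDescriptors)
{
    cv::Mat normalized;
    siftDescriptors.convertTo(normalized, CV_32F, 1.0 / 512.0);
    return normalized;
}
```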