SURF

Measure of accuracy in pattern recognition using SURF in OpenCV

帅比萌擦擦* submitted on 2019-12-04 15:22:14
I'm currently working on pattern recognition using SURF in OpenCV. What I have so far: I've written a program in C# where I can select a source image and a template I want to find. I then transfer both pictures to a C++ DLL where I've implemented a program using the OpenCV SURF detector, which returns all the keypoints and matches back to my C# program, where I try to draw a rectangle around my matches. Now my question: is there a common measure of accuracy in pattern recognition? For example, the number of matches in proportion to the number of keypoints in the template? Or
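There is no single standard metric, but one common proxy combines Lowe's ratio test with the fraction of template keypoints that find a good match. The sketch below is an assumption about a reasonable measure, not something from the question; the 0.75 ratio threshold is the conventional default, and the pair-of-distances representation stands in for OpenCV's DMatch objects.

```python
def ratio_test(match_pairs, ratio=0.75):
    """Lowe's ratio test: keep a match only when its best distance is
    clearly below the second-best distance (i.e. the match is unambiguous).
    match_pairs: list of (best_distance, second_best_distance) tuples."""
    return [d for d, d2 in match_pairs if d < ratio * d2]

def match_quality(num_good_matches, num_template_keypoints):
    """Fraction of template keypoints that survived the ratio test.
    A crude but widely used accuracy proxy in [0, 1]."""
    if num_template_keypoints == 0:
        return 0.0
    return num_good_matches / num_template_keypoints
```

After RANSAC homography estimation, the inlier ratio (inliers / good matches) is another common confidence score for the final detection.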

How to train OpenCV SVM with BoW Properly

*爱你&永不变心* submitted on 2019-12-04 13:30:09
Question: I can't train the SVM to recognize my object. I'm trying to do this using SURF + Bag of Words + SVM. My problem is that the classifier does not detect anything; all the results are 0. Here is my code: Ptr<FeatureDetector> detector = FeatureDetector::create("SURF"); Ptr<DescriptorExtractor> descriptors = DescriptorExtractor::create("SURF"); string to_string(const int val) { int i = val; std::string s; std::stringstream out; out << i; s = out.str(); return s; } Mat compute_features(Mat image) {
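An all-zero SVM output in a BoW pipeline often traces back to the per-image feature, not the SVM itself: if the BoW histograms are unnormalized (so image size dominates) or have mismatched types, the classifier collapses to one class. A minimal numpy sketch of the histogram step, with random data standing in for real SURF descriptors and a trained vocabulary:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word and return a
    normalized count histogram — the fixed-length feature fed to the SVM.
    descriptors: (n, d) float array; vocabulary: (k, d) k-means centers."""
    dists = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)                     # nearest center per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float32)
    return hist / max(hist.sum(), 1.0)               # normalize: scale-invariant input

rng = np.random.default_rng(0)
hist = bow_histogram(rng.random((40, 64)), rng.random((10, 64)))
```

Other usual suspects: training the vocabulary on too few images, and passing labels of the wrong integer type to the SVM.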

OpenCV for Android - training SVM with SURF descriptors

时间秒杀一切 submitted on 2019-12-04 11:14:31
I need some help training an SVM for an Android app. I have a set of images in 12 different classes and extracted all the descriptors from them. I managed to get the same number of descriptors for each image. What I need is to train an SVM for my Android application with those descriptors. I'm not sure whether I should train it in the Android emulator or write a C++ program to train the SVM and then load it in my app (if I use OpenCV's Windows library to train the SVM and then save it, will the library I'm using for Android recognize the saved SVM file?). I guess I shouldn't train the SVM with

SIFT and SURF feature extraction Implementation using MATLAB

大城市里の小女人 submitted on 2019-12-04 10:19:52
I am building an ancient-coin recognition system using MATLAB. What I have done so far: convert to grayscale, remove noise using a Gaussian filter, enhance contrast, and detect edges using the Canny edge detector. Now I want to extract features for classification. The features I plan to use are roundness, area, colour, SIFT, and SURF. My problem is how to apply the SIFT and SURF algorithms in my project; I couldn't find built-in functions for either. You can find a MATLAB implementation of SIFT features here: http://www.cs.ubc.ca/~lowe/keypoints/ You can find SIFT as a C implementation with MATLAB

OpenCV Python: Occasionally get segmentation fault when using FlannBasedMatcher

被刻印的时光 ゝ submitted on 2019-12-04 09:59:19
I'm trying to classify objects using SURF and kNN. The code works well, but it occasionally crashes with a segmentation fault. I'm not sure whether I did something wrong, but I'm fairly sure the code is correct. Here is the input file in case you want to reproduce the issue. Link to download the dataset import numpy as np import cv2 import sys trainfile = ['/home/nuntipat/Documents/Dataset/Bank/Training/15_20_front.jpg' , '/home/nuntipat/Documents/Dataset/Bank/Training/15_50_front.jpg' , '/home/nuntipat/Documents/Dataset/Bank/Training/15_100_front.jpg' , '/home/nuntipat/Documents

How to use Mikolajczyk's evaluation framework for feature detectors/descriptors?

时光怂恿深爱的人放手 submitted on 2019-12-04 07:29:56
I'm trying to assess the correctness of my SURF descriptor implementation with the de facto standard framework by Mikolajczyk et al. I'm using OpenCV to detect and describe SURF features, and I use the same feature positions as input to my descriptor implementation. To evaluate descriptor performance, the framework requires evaluating detector repeatability first. Unfortunately, the repeatability test expects a list of feature positions along with ellipse parameters defining the size and orientation of an image region around each feature. However, OpenCV's SURF detector only provides feature
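SURF regions are circular, so OpenCV's (x, y, size) keypoints can be converted to the framework's ellipse form: the file lists `x y a b c` per feature, where (a, b, c) parameterize the region as a*x² + 2*b*x*y + c*y² = 1. For a circle of radius r that reduces to a = c = 1/r², b = 0, with r = size/2 since KeyPoint.size is a diameter. The header convention below (a scale line, then the point count) is my reading of the framework's sample files, so treat it as an assumption:

```python
def write_mikolajczyk(path, keypoints):
    """Write (x, y, size) keypoints in Mikolajczyk's region file format:
    one 'x y a b c' line per feature, circular regions (b = 0)."""
    with open(path, "w") as f:
        f.write("1.0\n")                  # header line (unused for detector eval)
        f.write(f"{len(keypoints)}\n")    # number of regions
        for x, y, size in keypoints:
            r = size / 2.0                # KeyPoint.size is a diameter
            inv_r2 = 1.0 / (r * r)        # a = c = 1/r^2 for a circle
            f.write(f"{x} {y} {inv_r2} 0 {inv_r2}\n")

write_mikolajczyk("surf_regions.txt", [(10.0, 20.0, 8.0)])
```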

Bag of Visual Words in Opencv

走远了吗. submitted on 2019-12-03 20:39:33
I am using BoW in OpenCV for clustering features of variable size. However, one thing is not clear from the OpenCV documentation, and I am unable to find the answer to this question. Assume the dictionary size is 100. I use SURF to compute the features, and each image has a variable number of descriptors, e.g. 128 x 34, 128 x 63, etc. Now in BoW each of them is clustered and I get a fixed descriptor size of 128 x 100 for an image. I know 100 is the number of cluster centers created using k-means clustering. But I am confused: if an image has 128 x 63 descriptors, then how come it clusters into
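As I understand it, the key point is that the BoW representation is one vector of length 100 per image (not 128 x 100): each of the 63 descriptors is assigned to its nearest cluster center, and only the count per center is kept — the descriptors themselves are discarded. A self-contained demonstration with random data in place of real SURF descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.random((63, 128)).astype(np.float32)  # 63 SURF descriptors
vocabulary = rng.random((100, 128)).astype(np.float32)  # 100 k-means centers

# Assign each descriptor to its nearest "visual word" ...
sq_dists = ((descriptors[:, None] - vocabulary[None]) ** 2).sum(-1)
nearest = sq_dists.argmin(1)                 # shape (63,): a word index each

# ... and count how many descriptors fell into each word.
bow = np.bincount(nearest, minlength=100)    # shape (100,): the image's BoW vector
```

So a 128 x 34 image and a 128 x 63 image both end up as length-100 vectors, which is what makes them comparable by an SVM or kNN.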

Drawing rectangle around detected object using SURF

纵然是瞬间 submitted on 2019-12-03 14:45:18
Question: I am trying to detect an object with the following code involving the SURF detector. I do not want to draw matches; I want to draw a rectangle around the detected object, but somehow I am unable to get a correct homography. Can anyone point out where I am going wrong? #include <stdio.h> #include <iostream> #include "opencv2/core/core.hpp" #include "opencv2/features2d/features2d.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include "opencv2/calib3d

OpenCV Combining SURF with Neural Network

孤街醉人 submitted on 2019-12-03 13:00:03
Question: I want to recognize vehicles (cars, bikes, etc.) in a static image. I was thinking of using SURF to get useful keypoints and descriptors and then training an MLP (Multi-Layer Perceptron) neural network. However, I don't know what the input to the neural network will be, or what its output will be, so that I can identify which portion of the image a vehicle is located in (probably a rectangle drawn around it). I know that SURF can return useful keypoints in the image along with their descriptors (I
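One common design (an assumption here, not the only option): the MLP never sees raw keypoints. Each image, or each sliding window for localization, is first reduced to a fixed-length BoW histogram over its SURF descriptors; that histogram is the input, and the output layer has one unit per class. A minimal numpy forward pass with illustrative layer sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_words, n_hidden, n_classes = 100, 32, 2   # BoW size, hidden units, vehicle/not

# Untrained random weights — in practice these come from backpropagation.
W1 = rng.standard_normal((n_words, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_classes)) * 0.1

def mlp_forward(bow_histogram):
    """Forward pass: fixed-length BoW histogram in, class probabilities out."""
    h = np.tanh(bow_histogram @ W1)          # hidden layer
    scores = h @ W2                          # one score per class
    e = np.exp(scores - scores.max())        # numerically stable softmax
    return e / e.sum()

probs = mlp_forward(rng.random(n_words))
```

Localization then comes from running the classifier over sliding windows (or clustering the matched keypoints) and drawing the rectangle around the best-scoring region, not from the network itself.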

Does anyone have any examples of using OpenCV with python for descriptor extraction?

拥有回忆 submitted on 2019-12-03 08:52:17
Question: I'm trying to use OpenCV to extract SURF descriptors from an image. I'm using OpenCV 2.4 and Python 2.7, but I am struggling to find any documentation about how to use the functions. I've been able to use the following code to extract features, but I can't find any sensible way to extract descriptors: import cv2 img = cv2.imread("im1.jpg") img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) surf = cv2.FeatureDetector_create('SURF') detector = cv2