surf

SURF description faster with FAST detection?

可紊 submitted on 2019-12-03 08:12:08
For my master's thesis, I am running some tests of the SIFT, SURF and FAST algorithms for logo detection on smartphones. When I simply time the detection, description and matching for some methods, I get the following results. For a SURF detector and SURF descriptor: 180 keypoints found; 1.994 seconds keypoint calculation time (SURF); 4.516 seconds description time (SURF); 0.282 seconds matching time (SURF). When I use a FAST detector instead of the SURF detector: 319 keypoints found; 0.023 seconds keypoint calculation time (FAST); 1.295 seconds description time (SURF); 0.397 seconds matching time (SURF). The…
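For reference, a minimal Python sketch of the two pipelines being timed, assuming an OpenCV build with the contrib xfeatures2d module (SURF is non-free) and a placeholder image name:

    import time
    import cv2

    img = cv2.imread("logo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    fast = cv2.FastFeatureDetector_create()

    # Pipeline 1: SURF detector + SURF descriptor
    t0 = time.perf_counter()
    kp_surf = surf.detect(img, None)
    t1 = time.perf_counter()
    kp_surf, des_surf = surf.compute(img, kp_surf)
    t2 = time.perf_counter()
    print(len(kp_surf), "keypoints,", t1 - t0, "s detection,", t2 - t1, "s description")

    # Pipeline 2: FAST detector + SURF descriptor
    t0 = time.perf_counter()
    kp_fast = fast.detect(img, None)
    t1 = time.perf_counter()
    kp_fast, des_fast = surf.compute(img, kp_fast)
    t2 = time.perf_counter()
    print(len(kp_fast), "keypoints,", t1 - t0, "s detection,", t2 - t1, "s description")

FAST typically returns more keypoints in much less time, but the SURF description step still dominates, which matches the numbers quoted above.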

Drawing rectangle around detected object using SURF

落花浮王杯 submitted on 2019-12-03 03:45:36
I am trying to detect an object with the following code involving a SURF detector. I do not want to draw matches; I want to draw a rectangle around the detected object, but somehow I am unable to get a correct homography. Can anyone point out where I am going wrong?

    #include <stdio.h>
    #include <iostream>
    #include "opencv2/core/core.hpp"
    #include "opencv2/features2d/features2d.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include "opencv2/calib3d/calib3d.hpp"

    using namespace cv;

    int main()
    {
        Mat object = imread( "sample.jpeg", CV_LOAD_IMAGE_GRAYSCALE );
        …
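For comparison, a minimal Python sketch of the usual pipeline for this task (Lowe ratio test, RANSAC homography, then projecting the object's corners into the scene); the scene file name, ratio and reprojection thresholds are placeholders, not values from the question:

    import numpy as np
    import cv2

    obj = cv2.imread("sample.jpeg", cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread("scene.jpeg", cv2.IMREAD_GRAYSCALE)     # hypothetical scene image

    surf = cv2.xfeatures2d.SURF_create(400)
    kp_obj, des_obj = surf.detectAndCompute(obj, None)
    kp_scene, des_scene = surf.detectAndCompute(scene, None)

    matcher = cv2.FlannBasedMatcher()
    matches = matcher.knnMatch(des_obj, des_scene, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe ratio test

    src = np.float32([kp_obj[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_scene[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # needs at least 4 good matches

    h, w = obj.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)

    out = cv2.cvtColor(scene, cv2.COLOR_GRAY2BGR)
    cv2.polylines(out, [np.int32(projected)], True, (0, 255, 0), 3)   # rectangle around object
    cv2.imwrite("detected.png", out)

A bad homography usually means too few inlier matches, so filtering with the ratio test before findHomography matters as much as the drawing code.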

OpenCV Combining SURF with Neural Network

房东的猫 submitted on 2019-12-03 03:11:58
I want to recognize vehicles (cars, bikes, etc.) in a static image. I was thinking of using SURF to get useful keypoints and descriptors and then training an MLP (multi-layer perceptron) neural network. However, I don't know what the input to the neural network will be, nor what its output will be, so that I can identify in which portion of the image a vehicle is located (probably a rectangle drawn around it). I know that SURF can return useful keypoints in the image along with their descriptors (I have done this). The keypoints have angles, and each keypoint corresponds to a 64- or 128-long vector as…
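One common way to bridge a variable number of SURF descriptors and a fixed-size MLP input is a bag-of-visual-words histogram per image; this is only one possible design, not something the question specifies, and localisation (e.g. a sliding window) would be a separate step on top of it. A rough Python sketch with made-up file names, labels and sizes:

    import numpy as np
    import cv2

    # Placeholder training set; a real classifier needs far more images.
    train_paths = ["car1.jpg", "car2.jpg", "bike1.jpg", "bike2.jpg"]
    train_labels = [0, 0, 1, 1]                        # 0 = car, 1 = bike

    surf = cv2.xfeatures2d.SURF_create(400)

    # 1. Build a visual vocabulary by clustering all SURF descriptors.
    bow_trainer = cv2.BOWKMeansTrainer(100)            # 100 visual words
    for path in train_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = surf.detectAndCompute(img, None)
        bow_trainer.add(des)
    vocab = bow_trainer.cluster()

    # 2. Represent each image as a fixed-length (100-dim) word histogram.
    bow_extractor = cv2.BOWImgDescriptorExtractor(surf, cv2.BFMatcher(cv2.NORM_L2))
    bow_extractor.setVocabulary(vocab)

    def bow_hist(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return bow_extractor.compute(img, surf.detect(img, None))

    X = np.vstack([bow_hist(p) for p in train_paths]).astype(np.float32)
    Y = np.float32([[1, 0] if l == 0 else [0, 1] for l in train_labels])  # one output per class

    # 3. MLP: 100 inputs (histogram), one hidden layer, 2 outputs (class scores).
    mlp = cv2.ml.ANN_MLP_create()
    mlp.setLayerSizes(np.int32([100, 32, 2]))
    mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
    mlp.train(X, cv2.ml.ROW_SAMPLE, Y)

In this scheme the network input is the per-image histogram, and the output is a score per vehicle class rather than a bounding box.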

Does anyone have any examples of using OpenCV with python for descriptor extraction?

杀马特。学长 韩版系。学妹 submitted on 2019-12-02 22:54:44
I'm trying to use OpenCV to extract SURF descriptors from an image. I'm using OpenCV 2.4 and Python 2.7, but I'm struggling to find any documentation that explains how to use the functions. I've been able to use the following code to extract features, but I can't find any sensible way to extract descriptors:

    import cv2
    img = cv2.imread("im1.jpg")
    img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    surf = cv2.FeatureDetector_create('SURF')
    detector = cv2.GridAdaptedFeatureDetector(surf, 50)  # max number of features
    fs = detector.detect(img2)

The code I tried for extracting…
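In the OpenCV 2.4 Python API used here, descriptors can be computed from the detected keypoints with a DescriptorExtractor. A minimal sketch, assuming that same 2.4 API (newer releases moved SURF to cv2.xfeatures2d.SURF_create):

    import cv2

    img = cv2.imread("im1.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    surf = cv2.FeatureDetector_create('SURF')
    detector = cv2.GridAdaptedFeatureDetector(surf, 50)      # max number of features
    keypoints = detector.detect(gray)

    extractor = cv2.DescriptorExtractor_create('SURF')
    keypoints, descriptors = extractor.compute(gray, keypoints)  # descriptors: N x 64 float32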

Encoding CV_32FC1 Mat data with base64

倾然丶 夕夏残阳落幕 submitted on 2019-12-02 06:01:56
Question: Hello, I am trying to extract the data from a SURF descriptor; when I try this with an ORB descriptor it works. With the SURF one, the program quits with a segmentation fault 11 on the base64 encode line. I use the base64 function from this site: Encoding and decoding base64. The exact problem is that the format of the ORB descriptor is CV_8UC1 while the SURF descriptor is CV_32FC1, so I must base64 encode 32-bit floats instead of 8-bit unsigned chars. How can I do this? Mat desc; vector…
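One likely cause of a crash like this (an assumption, since the encoding call is cut off above) is passing the buffer length in elements rather than bytes: a CV_32FC1 Mat occupies desc.total() * desc.elemSize() bytes, i.e. four bytes per element, not desc.total(). As an illustration of the round trip, here is the Python/NumPy equivalent, where the byte length is handled automatically:

    import base64
    import numpy as np
    import cv2

    surf = cv2.xfeatures2d.SURF_create(400)
    img = cv2.imread("im1.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder image
    kp, desc = surf.detectAndCompute(img, None)            # desc.dtype == np.float32 (CV_32FC1)

    encoded = base64.b64encode(desc.tobytes()).decode("ascii")          # raw bytes, not element count
    decoded = np.frombuffer(base64.b64decode(encoded), dtype=np.float32).reshape(desc.shape)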

Converting Mat to Keypoint?

橙三吉。 submitted on 2019-12-02 03:41:27
I'm writing both descriptors (SurfDescriptorExtractor output) and keypoints (SurfFeatureDetector output) to an XML file. Before writing, the keypoints (std::vector<KeyPoint>) are converted to Mat (following this: convert keypoints to mat or save them to text file opencv). For the descriptors this isn't necessary; they're already Mat. So both are saved as Mat, and there's no problem reading either. But when using a FlannBasedMatcher and then drawMatches, this method asks for keypoint data. The question is: how would you convert a Mat back to a vector of KeyPoint, and what would be the best approach? This is how the…
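A sketch of the reverse conversion in Python, assuming the keypoints were flattened into an N x 7 float Mat with columns (x, y, size, angle, response, octave, class_id); the column order is an assumption and should be adjusted to whatever layout the writer actually produced:

    import numpy as np
    import cv2

    def mat_to_keypoints(m):
        """Rebuild cv2.KeyPoint objects from an N x 7 array (assumed column order)."""
        kps = []
        for x, y, size, angle, response, octave, class_id in np.asarray(m, dtype=np.float32):
            kps.append(cv2.KeyPoint(float(x), float(y), float(size), float(angle),
                                    float(response), int(octave), int(class_id)))
        return kps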

Setting SURF algorithm parameters in OpenCV Android or Java

牧云@^-^@ submitted on 2019-12-01 11:01:26
Question: A question about object matching in Android OpenCV. As I cannot find any sample code for using SURF on the Android platform, I would like to refer to some sample code in C++. But I have no idea how to set the threshold value of the SURF FeatureDetector in Android. Can anyone with experience of Android OpenCV help? Thanks a lot! Answer 1: I don't think it is possible right now, but there is a workaround I'm using. You have to create a text file containing the parameters and then read the file with…

Different SURF Features Extracted Between MATLAB and OpenCV?

跟風遠走 submitted on 2019-12-01 06:56:24
I'm implementing an algorithm in OpenCV that I designed in MATLAB. I'm writing a unit test for the SURF feature extractor in OpenCV, and I want to compare MATLAB's extracted SURF features to OpenCV's. The issue is that, using the same parameters for both the MATLAB and OpenCV extractors, I get different numbers of features. How is this possible? Are there different ways to implement SURF? For MATLAB (http://www.mathworks.com/help/vision/ref/detectsurffeatures.html) I'm using: MetricThresh: 200, NumOctaves: 3, NumScaleLevels: 4, SURFSize: 64. For OpenCV I'm using: HessianThreshold:…
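Differences are expected even with "the same" settings: the MATLAB and OpenCV SURF implementations are independent, and their parameters do not map one-to-one, so identical values need not give identical keypoint sets. As a hedged guess at the closest OpenCV (Python) equivalent of the MATLAB options above, with the NumScaleLevels correspondence being only approximate:

    import cv2

    # Approximate mapping (an assumption, not an exact equivalence):
    #   MetricThresh 200    -> hessianThreshold=200
    #   NumOctaves 3        -> nOctaves=3
    #   NumScaleLevels 4    -> nOctaveLayers=4 (the two toolboxes count scale levels differently)
    #   SURFSize 64         -> extended=False (64-dimensional descriptors)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=200, nOctaves=3,
                                       nOctaveLayers=4, extended=False, upright=False)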

How do I scale the x and y axes in mayavi2?

泪湿孤枕 submitted on 2019-11-30 16:57:14
Question: I want to do a 3-D plot with mayavi2 using mayavi.mlab.surf(). This function has an argument called warp_scale that can be used to scale the z axis; I'm looking for something similar for the x and y axes. I can do this manually by multiplying the x and y arrays and then using the ranges argument of mayavi.mlab.axes() to correct the axis labels, but I'm looking for a more direct approach like that of warp_scale. Thanks! Answer 1: When "m" is your surface object: m.actor.actor.scale = (0.1,…

How do I scale the x and y axes in mayavi2?

一曲冷凌霜 submitted on 2019-11-30 13:27:36
I want to do a 3-D plot with mayavi2 using mayavi.mlab.surf(). This function has an argument called warp_scale that can be used to scale the z axis; I'm looking for something similar for the x and y axes. I can do this manually by multiplying the x and y arrays and then using the ranges argument of mayavi.mlab.axes() to correct the axis labels, but I'm looking for a more direct approach like that of warp_scale. Thanks! When "m" is your surface object: m.actor.actor.scale = (0.1, 1.0, 1.0) (http://osdir.com/ml/python.enthought.devel/2006-11/msg00067.html). I was looking for the same…
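Putting the answer together, a small sketch (the array shapes and scale factors are arbitrary): warp_scale handles z, the actor's scale property stretches x and y after the surface is built, and mlab.axes(ranges=...) keeps the tick labels in the original data units.

    import numpy as np
    from mayavi import mlab

    x, y = np.mgrid[0.0:10.0:100j, 0.0:5.0:100j]
    z = np.sin(x) * np.cos(y)

    m = mlab.surf(x, y, z, warp_scale=2.0)                 # scales z only
    m.actor.actor.scale = (0.1, 1.0, 1.0)                   # scales x (and optionally y) afterwards
    mlab.axes(m, ranges=(0, 10, 0, 5, z.min(), z.max()))    # labels stay in data units
    mlab.show()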