SIFT

How does vl_ubcmatch work technically?

Submitted by 只愿长相守 on 2019-12-01 06:06:21
Question: I am reading through vl_ubcmatch's function source code, provided here, and I am trying to understand how it computes the match score and how it works internally. However, this C code is full of macros, ##-style token pasting, and similar constructs that I have no experience with, so the main problem here is really my lack of C knowledge. If possible, could somebody explain how exactly vl_ubcmatch works? How does it compare two descriptors? Answer 1: This is explained in Sections 7.1
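For reference while reading the C source: vl_ubcmatch implements David Lowe's ratio test. Each descriptor in the first set is matched to its nearest neighbour in the second set by squared Euclidean distance, and the match is kept only when the best distance, multiplied by the threshold (1.5 by default, per VLFeat's documentation), is still smaller than the second-best distance. A minimal Python sketch of that logic, assuming plain numpy descriptor arrays rather than VLFeat's actual C internals:

# Sketch of vl_ubcmatch-style matching: nearest neighbour by squared L2
# distance, accepted only if clearly better than the runner-up.
import numpy as np

def ubc_match(desc1, desc2, thresh=1.5):
    """desc1: (N, 128); desc2: (M, 128). Returns (i, j, score) tuples."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.sum((desc2 - d) ** 2, axis=1)   # squared L2 to all candidates
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best * thresh < second:                 # Lowe's ratio test
            matches.append((i, order[0], best))
    return matches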

How to save Sift feature vector for classification using Neural network

Submitted by 那年仲夏 on 2019-12-01 01:53:13
A Matlab implementation of SIFT was found at http://www.cs.ubc.ca/~lowe/keypoints/ , with the help of Stack Overflow. I want to save the features to a .mat file. The features are roundness, color, the white-pixel count in the binary image, and the SIFT features. For the SIFT features I took the descriptors from the code above { [siftImage, descriptors, locs] = sift(filteredImg) }. So my feature vector is now FeaturesTest = [roundness, nWhite, color, descriptors, outputs]; When saving this to a .mat file using save('features.mat','Features'); it gives an error like this: ??? Error using ==> horzcat
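The usual cause of that horzcat error: square-bracket concatenation requires every piece to have the same number of rows, and the K x 128 descriptors matrix cannot sit next to scalar features (note also that the code saves 'Features' while the variable is named FeaturesTest). One fix is to store each feature under its own field rather than forcing everything into one flat vector. A hedged Python sketch of that layout using scipy.io.savemat, with every value a made-up placeholder; a Matlab struct passed to save achieves the same thing:

# Save heterogeneous features to a .mat file without concatenating them
# into one flat vector; all values below are placeholders.
import numpy as np
from scipy.io import savemat

roundness = 0.87                       # scalar shape feature (placeholder)
n_white = 1523                         # white-pixel count (placeholder)
color = np.array([0.4, 0.3, 0.3])      # e.g. mean RGB (placeholder)
descriptors = np.zeros((42, 128))      # K x 128 SIFT descriptors (placeholder)

savemat('features.mat', {
    'roundness': roundness,
    'nWhite': n_white,
    'color': color,
    'descriptors': descriptors,
})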

opencv::sift feature extraction

Submitted by 夙愿已清 on 2019-12-01 01:42:45
Introduction to SIFT feature detection. The key stages of SIFT (Scale-Invariant Feature Transform) feature detection are:
- building the scale space and searching for extrema
- keypoint localization (finding the accurate keypoint position and removing weak edge responses)
- keypoint orientation assignment
- keypoint descriptor construction

Building the scale space and finding extrema, how it works:
1. Build a Gaussian pyramid of the image, compute the difference of Gaussians (DoG), and find the maxima and minima at every level.
2. Within each octave of the Gaussian pyramid, the images are separated into several levels according to the value of sigma, with at least 4 levels per octave.

Keypoint localization: the extrema are first found at pixel resolution, but the more accurate position lies at a sub-pixel location; recovering it is what is called (precise) keypoint localization. Weak edge responses are removed using the eigenvalues of the Hessian matrix: responses below a threshold are discarded automatically.

Keypoint orientation assignment: compute the gradients of the image at each level, weight them with a Gaussian over a given window (sigma = scale x 1.5), and accumulate a 36-bin histogram over 0-360 degrees. Find the bin with the highest peak and keep every orientation above 80% of that maximum. This provides rotation invariance and makes matching more stable; roughly 15% of keypoints receive multiple orientations.

Keypoint descriptor: a polynomial (parabolic) interpolation is fitted to locate the peak precisely, and the final descriptor has 4 x 4 x 8 = 128 dimensions. cv
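The pipeline above is what OpenCV's SIFT implementation performs internally. A minimal OpenCV-Python sketch that runs it end to end (the file name is a placeholder; on older OpenCV builds the factory function lives in cv2.xfeatures2d instead):

# Minimal end-to-end SIFT extraction with OpenCV-Python.
# 'box.png' is a placeholder file name.
import cv2

img = cv2.imread('box.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()   # cv2.xfeatures2d.SIFT_create() on older builds
keypoints, descriptors = sift.detectAndCompute(gray, None)

print(len(keypoints), descriptors.shape)   # N keypoints, an N x 128 descriptor matrix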

How does the SiftDescriptorExtractor from OpenCV convert descriptor values?

Submitted by 时光怂恿深爱的人放手 on 2019-11-30 17:18:58
Question: I have a question about the last step of the SiftDescriptorExtractor's job. I'm doing the following:

SiftDescriptorExtractor extractor;
Mat descriptors_object;
extractor.compute( img_object, keypoints_object, descriptors_object );

Now I want to check the elements of the descriptors_object Mat:

std::cout << descriptors_object.row(1) << std::endl;

The output looks like: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 32, 15, 0, 0, 0, 0, 0, 0, 73, 33, 11, 0, 0, 0, 0, 0, 0, 5,
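What happens at the end of descriptor computation explains those values. OpenCV's sift.cpp normalizes the raw 128-bin histogram to unit length, clamps every bin at 0.2 (SIFT_DESCR_MAG_THR), renormalizes, then scales by 512 (SIFT_INT_DESCR_FCTR) and saturates to [0, 255], which is why the floats print as integers in that range. A Python sketch of the idea, not a byte-exact replica of the C++ code:

# Sketch of OpenCV's final SIFT descriptor conversion: normalize, clamp
# each bin at 0.2, renormalize, scale by 512, saturate to [0, 255].
import numpy as np

def convert_descriptor(raw, clamp=0.2, factor=512.0):
    v = raw / max(np.linalg.norm(raw), 1e-12)      # normalize to unit length
    v = np.minimum(v, clamp)                       # clamp large bins for illumination robustness
    v = v / max(np.linalg.norm(v), 1e-12)          # renormalize
    return np.clip(np.round(v * factor), 0, 255)   # like saturate_cast<uchar>(v * 512)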

SIFT

Submitted by 我是研究僧i on 2019-11-30 12:07:58
\section{SIFT} \subsection{Introduction to SIFT\cite{2}} \textbf{The Scale-Invariant Feature Transform (SIFT)}, first proposed by David Lowe in 1999, maps an image to a set of local feature vectors; these vectors are invariant to translation, scaling, and rotation, and also partially invariant to illumination changes and to affine and projective transformations. \begin{adjustwidth}{1cm}{1cm} Characteristics of the SIFT algorithm:~\\ 1. SIFT features are local image features, invariant to rotation, scale, and brightness changes, and stable to some degree under viewpoint changes, affine transformations, and noise;\\ 2. Distinctiveness: the features are information-rich and suited to fast, accurate matching against large feature databases;\\ 3. Abundance: even a few objects can generate a large number of feature vectors;\\ 4. Speed: an optimized matching algorithm can even meet real-time requirements;\\ 5. Extensibility: SIFT features can easily be combined with other kinds of feature vectors. \end{adjustwidth} \begin{adjustwidth}{1cm}{1cm} The basic steps of the SIFT algorithm (step 1 is made concrete just below):~\\ 1. Difference-of-Gaussians (DoG) filtering;\\ 2. Scale-space extrema detection and keypoint localization;\\ 3. Keypoint orientation assignment;\\ 4. Keypoint descriptor construction. \end{adjustwidth}
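To make step 1 concrete, the difference-of-Gaussians used in scale-space construction, following Lowe's definitions (here $I$ is the input image, $*$ denotes convolution, and $k$ is the scale multiplier between adjacent levels):

% Difference-of-Gaussians, following Lowe's formulation:
% L is the image smoothed at scale sigma, D is the DoG response.
\begin{align}
L(x, y, \sigma) &= G(x, y, \sigma) * I(x, y), \\
G(x, y, \sigma) &= \frac{1}{2\pi\sigma^{2}} \, e^{-(x^{2}+y^{2})/2\sigma^{2}}, \\
D(x, y, \sigma) &= L(x, y, k\sigma) - L(x, y, \sigma).
\end{align}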

OpenCV-Python dense SIFT

Submitted by 旧街凉风 on 2019-11-30 11:37:29
OpenCV has very good documentation on generating SIFT descriptors, but this is a version of "weak SIFT", where the keypoints are detected by the original Lowe algorithm. The OpenCV example reads something like:

img = cv2.imread('home.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
sift = cv2.SIFT()
kp = sift.detect(gray, None)
kp, des = sift.compute(gray, kp)

What I'm looking for is strong/dense SIFT, which does not detect keypoints but instead calculates SIFT descriptors for a set of patches (e.g. 16x16 pixels, 8 pixels padding) covering an image as a grid. As I understand it, there are two
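One common approach is to skip detection entirely: build a regular grid of cv2.KeyPoint objects yourself and pass them to sift.compute, which describes whatever locations it is given. A sketch, where the grid step and patch size are assumed values to tune:

# Dense SIFT sketch: compute descriptors on a regular grid instead of
# detected keypoints. step and size are assumptions, not fixed values.
import cv2

img = cv2.imread('home.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

step, size = 8, 16   # 8-px spacing, 16-px patch diameter (assumed)
keypoints = [cv2.KeyPoint(float(x), float(y), float(size))
             for y in range(step, gray.shape[0] - step, step)
             for x in range(step, gray.shape[1] - step, step)]

sift = cv2.SIFT_create()
keypoints, descriptors = sift.compute(gray, keypoints)   # no detect() call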

How to use SIFT/SURF as features for a machine learning algorithm?

Submitted by 无人久伴 on 2019-11-30 07:14:13
I'm working on an automatic image annotation problem in which I'm trying to associate tags with images, and for that I'm trying to use SIFT features for learning. The problem is that the SIFT features are a set of keypoints, each with a 2-D descriptor array, and the number of keypoints is huge. How many should I use, and how do I feed them to a learning algorithm that typically accepts only 1-D features? You can represent each SIFT descriptor as a "visual word", which is a single number, and use it as SVM input; I think that is what you need. This is usually done by k-means clustering. The method is called "bag-of-words" and
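To spell out the bag-of-words recipe: cluster the pooled SIFT descriptors from the training set into k "visual words" with k-means, then represent each image as a normalized histogram of word counts, a fixed-length 1-D vector an SVM can consume. A sketch, where k = 100 and the use of scikit-learn are assumptions rather than anything prescribed by the question:

# Bag-of-visual-words: k-means vocabulary + per-image word histograms.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=100):
    """all_descriptors: (N, 128) descriptors pooled from training images."""
    return KMeans(n_clusters=k, n_init=10).fit(all_descriptors)

def bow_histogram(descriptors, kmeans):
    """descriptors: (M, 128) for one image -> (k,) normalized histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)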

Trying to match two images using sift in OpenCv, but too many matches

Submitted by 只谈情不闲聊 on 2019-11-30 06:38:33
Question: I am trying to implement a program that takes two input images: one of a box alone and one that includes the same box in a scene. The program is supposed to find keypoints in the two images and show the images with their keypoints matched; that is, in the end I expect to see the two input images appended side by side with their matched keypoints connected. My code is as follows:

#include <opencv2\opencv.hpp>
#include <iostream>

int main(int argc, const char* argv[]) {
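The standard cure for an overwhelming number of matches is Lowe's ratio test: request the two nearest neighbours per descriptor and keep a match only when the best is clearly closer than the runner-up. A Python sketch of the filtering (the question's code is C++, but the OpenCV API mirrors it; the 0.75 ratio and the file names are assumptions):

# Ratio-test filtering to thin out SIFT matches before drawing them.
import cv2

img_object = cv2.imread('box.png', cv2.IMREAD_GRAYSCALE)           # placeholder path
img_scene = cv2.imread('box_in_scene.png', cv2.IMREAD_GRAYSCALE)   # placeholder path

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_object, None)
kp2, des2 = sift.detectAndCompute(img_scene, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]   # Lowe's ratio test

out = cv2.drawMatches(img_object, kp1, img_scene, kp2, good, None)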

OpenCV SIFT descriptor keypoint radius

Submitted by 允我心安 on 2019-11-30 05:32:09
I was digging into OpenCV's implementation of SIFT descriptor extraction. I came upon some puzzling code to get the radius of the interest point neighborhood. Below is the annotated code, with variable names changed to be more descriptive:

// keep octave below 256 (255 is 1111 1111)
int octave = kpt.octave & 255;
// if octave is >= 128, ...????
octave = octave < 128 ? octave : (-128 | octave);
// 1/2^absval(octave)
float scale = octave >= 0 ? 1.0f/(1 << octave) : (float)(1 << -octave);
// multiply the point's radius by the calculated scale
float scl = kpt.size * 0.5f * scale;
// the constant
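The line with the ???? comment is a manual sign extension. OpenCV packs a signed octave into the low byte of kpt.octave, and octaves can be negative (-1 when the input image was doubled before building the pyramid), so a stored value of 255 really means -1. Values >= 128 are two's-complement negatives, and OR-ing with -128 sets the high sign bits back. A quick Python demonstration of the same arithmetic:

# Manual sign extension of the 8-bit octave packed into kpt.octave.
def unpack_octave(packed):
    octave = packed & 255
    if octave >= 128:
        octave = -128 | octave   # sign-extend: 255 -> -1, 254 -> -2, ...
    return octave

assert unpack_octave(255) == -1   # keypoint from the upsampled base image
assert unpack_octave(1) == 1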

OpenCV 3.0 - Image Feature Detection

Submitted by 偶尔善良 on 2019-11-30 04:02:30
Using some of OpenCV's built-in algorithms to detect image features. Features extracted from an image can then be used for image matching and retrieval. Commonly used feature detection algorithms:
Harris: corner detection
SIFT: blob detection
SURF: blob detection
FAST: corner detection
BRIEF: a binary keypoint descriptor (used together with a detector, rather than detecting on its own)

What is an image feature? An image feature is the most distinctive, discriminative region of an image. Feature regions are mainly found at corners, high-density regions, edges (edges can divide an image into several regions), and blobs (regions that differ strongly from the surrounding pixels).

Corner detection with cornerHarris():

import cv2
import numpy as np
import matplotlib.pyplot as plt

img1 = cv2.imread('data/aero3.jpg')             # imread takes an imread flag, not a cvtColor code
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
# second parameter: neighborhood (block) size considered around each pixel
# third parameter: Sobel aperture size, an odd value between 3 and 31
dst = cv2.cornerHarris(gray, 3, 23, 0.04)
# mark feature points in red (OpenCV stores channels in BGR order)
img1[dst > 0.01 * dst.max()] = [0, 0, 255]

Harris detects corners well, and it keeps detecting them under rotation, which follows from the nature of corners. plt.figure(figsize=( 12 , 7