sift

How the SIFT algorithm works, explained

感情迁移 submitted on 2020-01-28 21:41:22
A detailed explanation of the scale-invariant feature transform matching algorithm, Scale Invariant Feature Transform (SIFT). Just For Fun. Reposted from: http://blog.csdn.net/zddblog/article/details/7521424 For beginners there are many gaps between David G. Lowe's paper and a working implementation; this article helps you bridge them. 1. SIFT overview. The scale-invariant feature transform (SIFT) is a computer-vision algorithm for detecting and describing local features in images. It searches for extrema in scale space and extracts their location, scale, and rotation invariants. The algorithm was published by David Lowe in 1999 and refined in 2004. Its applications include object recognition, robotic mapping and navigation, image stitching, 3D model reconstruction, gesture recognition, image tracking, and motion matching. The algorithm is patented; the patent holder is the University of British Columbia. Describing and detecting local image features helps with object recognition. SIFT features are based on interest points in an object's local appearance and are invariant to image scale and rotation. They are also quite tolerant of changes in illumination, noise, and small changes in viewpoint. Thanks to these properties they are highly distinctive and relatively easy to extract, so objects can be identified in a large feature database with few false matches. The detection rate for partially occluded objects using SIFT descriptors is also high; as few as three SIFT features are enough to compute an object's position and orientation

SURF description faster with FAST detection?

折月煮酒 submitted on 2020-01-22 12:28:53
Question: For my master's thesis, I am running some tests on the SIFT, SURF and FAST algorithms for logo detection on smartphones. When I simply time the detection, description and matching for some methods, I get the following results. For a SURF detector and SURF descriptor: 180 keypoints found, 1.994 seconds keypoint calculation time (SURF), 4.516 seconds description time (SURF), 0.282 seconds matching time (SURF). When I use a FAST detector instead of the SURF detector: 319 keypoints found, 0.023 seconds

MSER+SIFT 图像的特征向量提取

ぐ巨炮叔叔 submitted on 2020-01-21 04:42:54
When doing image retrieval you need to extract feature vectors from the images. Traditional local descriptors such as SIFT and SURF, used without further processing, tend to produce a large number of feature vectors. More vectors describe the image more precisely and raise retrieval accuracy, but they also increase hardware cost and consume a lot of computation time. In my experiments, a single 384×256 image yields about 200 SIFT features on average; computing similarity directly against the data in the library takes roughly a minute. For latency-critical applications that is unacceptable, so greatly reducing the number of SIFT features, without compressing the image and losing information, is both necessary and valuable. After consulting a lot of material, I found that before the keypoint compute step, replacing the SIFT-detected keypoints with keypoints detected by MSER greatly reduces the number of SIFT descriptors. If MSER is unfamiliar, there are several good blog posts on it; it is not very complicated. Briefly, MSER (Maximally Stable Extremal Regions) is based on the watershed idea: the image is binarized at thresholds over the range [0, 255], and as the threshold changes (the step size is configurable) the binary image goes from all black (0) to all white (255), like a bird's-eye view of land and sea as the water level rises. During this process, some connected regions change little or not at all in area as the threshold varies; those regions are the MSERs. I will not go into the algorithmic details and implementation of MSER here; study them if you are interested. Once the keypoints have been detected with MSER

Getting stuck on Matlab's subplot mechanism for matching images' points for vlfeat

◇◆丶佛笑我妖孽 submitted on 2020-01-12 08:12:01
Question: I am using vlfeat in Matlab and I am following this question here. Below are my simple test images: Left image: Right image: I did a simple test with these two images (the right image is just a rotated version of the left), and I got the corresponding result: It works, but I have one more requirement, which is to match the SIFT points of the two images and show them, like this: I do understand that vl_ubcmatch returns 2 arrays of matched indices, and it is not a problem to map them

Retrieving similar images from a set of images using SIFT/SURF

微笑、不失礼 submitted on 2020-01-06 19:24:26
Question: I am working on SIFT features and I'm using a visual bag-of-words approach to build a vocabulary first and then do the matching. I've found similar questions but didn't find an appropriate answer. The same question is asked at the link below, but there is no satisfactory answer; can anyone help me? Thank you in advance. https://stackoverflow.com/questions/29366944/finding-top-similar-images-from-a-database-using-sift-surf Answer 1: SIFT and SURF methods are both implemented in the LIRE project and ready to use.
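The bag-of-visual-words pipeline the question refers to can be sketched without any retrieval library: cluster training descriptors into a vocabulary, represent each image as a histogram of nearest-word counts, and rank by histogram distance. A minimal numpy sketch with random stand-in descriptors (real ones would come from SIFT or SURF):

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(descriptors, k=8, iters=20):
    """Plain k-means over all training descriptors -> k visual words."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((descriptors[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, vocab):
    """One image -> normalized histogram of nearest visual words."""
    labels = np.argmin(((descriptors[:, None] - vocab) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Stand-in "images": each a set of 128-d descriptors (as SIFT would give).
images = [rng.normal(loc=i % 3, size=(50, 128)) for i in range(6)]
vocab = build_vocabulary(np.vstack(images))
hists = np.array([bow_histogram(d, vocab) for d in images])

# Query with image 0 and rank the set by histogram distance.
dists = np.linalg.norm(hists - hists[0], axis=1)
ranking = np.argsort(dists)
print(ranking)
```

Production systems replace the plain k-means and linear scan with approximate nearest-neighbor search and TF-IDF weighting, but the structure is the same.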

unknown command line flag 'logtostderr'

前提是你 submitted on 2020-01-06 13:32:09
Question: I am running the SIFT program from this site: https://github.com/sanchom/sjm Everything goes well until I run my program: $ python extract_caltech.py --dataset_path=[path_to_your_101_Categories_directory] \ --process_limit [num_processes_to_spawn] --sift_normalization_threshold 2.0 --sift_discard_unnormalized \ --sift_grid_type FIXED_3X3 --sift_first_level_smoothing 0.66 --sift_fast --sift_multiscale \ --features_directory [path_for_extracted_features] In the output, I see this line

How to get the positions of the matched points with Brute-Force Matching / SIFT Descriptors

隐身守侯 submitted on 2020-01-06 07:53:45
Question: I tried matching my SIFT keypoints with the BF matcher. I did it following this tutorial: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html But when I want to get the x,y positions with print(good), it only gives me something like DMatch 000001DD9C4E0EB0. How can I convert this into positions? Answer 1: As you provided no code, I will answer your question based on the code in the tutorial. Basically, keypoints are the points detected by the SIFT algorithm with the

Computational Complexity of SIFT descriptor?

我们两清 submitted on 2020-01-03 19:22:12
Question: The SIFT descriptor is a local descriptor introduced by David Lowe. The pipeline can be split into multiple parts: 1. constructing a scale space; 2. LoG approximation; 3. finding keypoints; 4. discarding bad keypoints; 5. assigning an orientation to the keypoints; 6. generating the SIFT features. So, my question is: what is the computational complexity of the SIFT descriptor? Something like O(2n + log n)? Answer 1: Here's a paper that talks exactly about this. The actual time complexity for an n-by-n
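As a rough cost model (my own back-of-envelope sketch, not the figure from the paper the answer cites), for an n×n image the scale-space pyramid dominates and the per-keypoint descriptor cost is constant:

```latex
% Pyramid: s blurred levels per octave, separable Gaussian of width w,
% each octave a quarter of the previous area, so the series converges:
T_{\text{pyramid}} = O(s\,w\,n^2)\sum_{o \ge 0} 4^{-o} = O(s\,w\,n^2)
% DoG extrema: each pixel compared with its 26 scale-space neighbours:
T_{\text{extrema}} = O(s\,n^2)
% Descriptors: k keypoints, each a fixed 4x4x8 = 128-bin histogram:
T_{\text{descr}} = O(k), \qquad k \ll n^2
% Total, treating s and w as constants: O(n^2 + k) = O(n^2).
```

In other words, the answer to "O(2n + log n)?" is no under this model: the cost is quadratic in the image side length (linear in the pixel count), plus a term linear in the number of keypoints.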