sift

ndk-build error: opencv2/core/core.hpp: No such file or directory

Anonymous (unverified) submitted on 2019-12-03 01:00:01
Question: I am having a problem using the OpenCV nonfree module on Android. I read this tutorial https://sites.google.com/site/wghsite/technical-notes/sift_surf_opencv_android but after running ndk-build it shows the following errors: guru@guru-Aspire-5738:~/Android/OpenCVWorkspace/sift_opencv_android/jni$ ~/Android/android-ndk-r9/ndk-build Install : libopencv_java.so => libs/armeabi-v7a/libopencv_java.so Install : libnonfree.so => libs/armeabi-v7a/libnonfree.so Compile++ thumb : test_sift <= test_sift.cpp /home/guru/Android/OpenCVWorkspace/sift_opencv

Why is RANSAC not working for my code?

Anonymous (unverified) submitted on 2019-12-03 00:45:01
Question: I am trying to find the fundamental matrix between 2 images and then transform them using RANSAC. I first use SIFT to detect keypoints and then apply RANSAC:

img1 = cv2.imread("im0.png", 0)  # queryImage
img2 = cv2.imread("im1.png", 0)  # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
src = np.float32([points.pt for points in kp1]).reshape(-1, 1, 2)
dst = np.float32([points.pt
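A likely reason RANSAC fails in the excerpt is that src and dst are built from the raw keypoint lists rather than from matched pairs. Below is a minimal sketch of the usual pipeline, assuming an opencv-contrib build that provides cv2.xfeatures2d; the BFMatcher ratio-test step is an addition not shown in the original excerpt.

import cv2
import numpy as np

img1 = cv2.imread("im0.png", 0)   # queryImage (file names from the question)
img2 = cv2.imread("im1.png", 0)   # trainImage

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors first, so that src and dst contain corresponding points
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe's ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Estimate the fundamental matrix with RANSAC; mask marks the inlier matches
F, mask = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 3.0, 0.99)
print(F)
print("inliers:", int(mask.sum()) if mask is not None else 0)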

Image detection features: SIFT, HISTOGRAM and EDGE

拟墨画扇 submitted on 2019-12-03 00:43:49
I am working on developing an object classifier using 3 different features, i.e. SIFT, HISTOGRAM and EDGE. However, these 3 features produce vectors of different dimensions, e.g. SIFT = 128 dimensions, HIST = 256. These features cannot be concatenated into one vector because of the different sizes. What I am planning to do, though I am not sure it is the correct way, is this: for each feature I train a classifier separately, then apply classification separately for the 3 different features, count the majority, and finally declare the image with the majority of votes. Do you think this is a
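A minimal sketch of the late-fusion / majority-vote scheme described above, assuming scikit-learn is available; the SVC classifier, the feature matrices X_sift, X_hist, X_edge and the labels y are placeholders, not part of the original question.

from collections import Counter
import numpy as np
from sklearn.svm import SVC

def train_voters(X_sift, X_hist, X_edge, y):
    # One classifier per feature type, since the vectors have different lengths
    clfs = [SVC(kernel="rbf"), SVC(kernel="rbf"), SVC(kernel="rbf")]
    for clf, X in zip(clfs, (X_sift, X_hist, X_edge)):
        clf.fit(X, y)
    return clfs

def predict_majority(clfs, x_sift, x_hist, x_edge):
    # Each classifier votes using its own feature vector; the majority wins
    votes = [clf.predict(np.asarray(x).reshape(1, -1))[0]
             for clf, x in zip(clfs, (x_sift, x_hist, x_edge))]
    return Counter(votes).most_common(1)[0][0]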

SIFT

Anonymous (unverified) submitted on 2019-12-03 00:37:01
1. Image scale space
Before looking at image feature matching, it is important to understand that two photos can be matched because their feature points are highly similar; this is where the "image scale space" comes in. When you "look" at a photo, you observe it at different "scales": the larger the scale, the blurrier the image. The "scale" is the σ value of a two-dimensional Gaussian function. Convolving a photo with 2D Gaussians of different σ values produces many Gaussian-blurred images, much like viewing the photo with your own eyes from different distances. All the images at the different scales together form the scale space of the single original image, and the "image scale-space representation" is the description of the image at all scales. Scale exists objectively in nature and is not a subjective invention; Gaussian convolution is only one way of representing the scale space.
2. Gaussian convolution
The Gaussian kernel is the only kernel that can produce a multi-scale space. For low-pass filtering, Gaussian smoothing is very effective in both the spatial and the frequency domain. The Gaussian function has five important properties:
(1) the two-dimensional Gaussian is rotationally symmetric;
(2) the Gaussian function is single-valued;
(3) the Fourier spectrum of the Gaussian has a single lobe;
(4) the filter width (the degree of smoothing) is determined by the single parameter σ;
(5) the computational cost of 2D Gaussian filtering grows linearly with the width of the filter template.
The scale-space representation L(x,y,σ) of an image I(x,y) is its convolution with the 2D Gaussian G(x,y,σ): L(x,y,σ) = G(x,y,σ) * I(x,y). What does this expression have to do with the "scale-space representation"? The "scale-space representation" refers to the different versions of the image smoothed by different Gaussian kernels; the image simply "looks" different at each scale, and the larger the Gaussian kernel, the blurrier it "looks". So does blurring the image have anything to do with finding feature points? A computer has no subjective awareness of where a feature point is; all it can do is pick out the points where the values (RGB, 0~255) change fastest. ―― Downsampling
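A short sketch of the idea above: convolving the same image with Gaussians of increasing σ gives the scale-space images, and the larger the σ, the blurrier the result. The file name is a placeholder.

import cv2

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
sigmas = [0.5, 1.0, 2.0, 4.0, 8.0]
# ksize=(0, 0) lets OpenCV derive the kernel size from sigma
scale_space = [cv2.GaussianBlur(img, (0, 0), sigmaX=s) for s in sigmas]
for s, blurred in zip(sigmas, scale_space):
    cv2.imshow("sigma = %.1f" % s, blurred)   # larger sigma -> blurrier image
cv2.waitKey(0)
cv2.destroyAllWindows()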

OpenCV3: Star and SIFT feature detection

Anonymous (unverified) submitted on 2019-12-03 00:25:02
Some features cannot be detected from contours alone. A face, for example, shows its characteristic structure in its content rather than in its edges, so edge detection will not find such key points; a Star detector is needed instead.

import numpy as np
import cv2 as cv
image = cv.imread('/Users/youkechaung/Desktop/算法/数据分析/AI/day02/day02/data/table.jpg')
# Detect internal detail: this is feature detection, not edge detection.
# Color is not very important here, so convert the image to grayscale.
cv.imshow('Original', image)
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
cv.imshow('Gray', gray)
detector = cv.xfeatures2d.StarDetector_create()  # feature detector
keypoints = detector.detect(gray)  # returns the keypoints
# print(keypoints)  # a vector of keypoints, not very readable when printed
cv.drawKeypoints(image, keypoints, image, flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)  # draw on the original image: the first argument is the source image, the second image is the output image, and this flag draws both the position and the orientation
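The excerpt only shows the Star half of the title; a minimal sketch of the SIFT half might look like the following, assuming an opencv-contrib build with the nonfree modules enabled and using a placeholder image path.

import cv2 as cv

image = cv.imread("table.jpg")                      # placeholder path
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
sift = cv.xfeatures2d.SIFT_create()                 # SIFT detector instead of Star
keypoints = sift.detect(gray)
cv.drawKeypoints(image, keypoints, image,
                 flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv.imshow("SIFT keypoints", image)
cv.waitKey(0)
cv.destroyAllWindows()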

Computer vision ―― SIFT descriptors

Anonymous (unverified) submitted on 2019-12-03 00:19:01
Descriptor implementation code. Here we use the binaries provided by the open-source toolkit VLFeat to compute the SIFT features of an image; implementing every step of SIFT in pure Python would probably not be very efficient. The VLFeat toolkit can be downloaded from http://www.vlfeat.org/ and its binaries run on all major platforms. The VLFeat library is written in C, but we can use the command-line interface it provides. Taking Windows 10 64-bit as an example, the downloaded file is vlfeat-0.9.20-bin.tar.gz; after extracting it, take sift.exe and vl.dll from the vlfeat-0.9.20/bin/win64 folder. The code is shown below:

# -*- coding: utf-8 -*-
from PIL import Image
from pylab import *
from numpy import *
import os
def process_image(imagename, resultname, params="--edge-thresh 10 --peak-thresh 5"):
    """ Process an image and save the result to a file. """
    if imagename[-3:] != 'pgm':
        # create a pgm file
        im = Image.open(imagename).convert('L')
        im.save('tmp.pgm')
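The excerpt is cut off inside process_image; a sketch of how such a wrapper typically continues is shown below, assuming the VLFeat sift executable (sift.exe with vl.dll copied next to the script, or on the PATH) can be invoked from the command line.

import os
from PIL import Image

def process_image(imagename, resultname, params="--edge-thresh 10 --peak-thresh 5"):
    """Process an image with the VLFeat sift binary and save the result to a file."""
    if imagename[-3:] != 'pgm':
        # the sift binary expects a grayscale .pgm image, so convert first
        im = Image.open(imagename).convert('L')
        im.save('tmp.pgm')
        imagename = 'tmp.pgm'
    # 'sift' is assumed to resolve to the VLFeat executable
    cmmd = "sift " + imagename + " --output=" + resultname + " " + params
    os.system(cmmd)
    print('processed', imagename, 'to', resultname)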

addpath(),genpath()

Anonymous (unverified) submitted on 2019-12-03 00:09:02
clear all; clc;
addpath(): opens files that are not in the current directory.
addpath('sparse-coding'); % 'sparse-coding' and 'SIFT' are both paths, two folders under the current directory
addpath('SIFT');
genpath('BP'): the result is a path string containing BP and every folder under the BP folder.
Source: 博客园 Author: 爽歪歪666 Link: https://www.cnblogs.com/shuangcao/p/11555067.html

opencv 3.4.6 error: cv::xfeatures2d::SIFT::create

Anonymous (unverified) submitted on 2019-12-02 23:52:01
opencv 3.4.6 error: cv::xfeatures2d::SIFT::create. Because the cv::xfeatures2d::SIFT::create algorithm in OpenCV 3.4.6 is covered by a patent, calling it raises the error: "this algorithm is patented and is excluded in this configuration; Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library in function 'cv::xfeatures2d::SIFT::create'". Rolling the OpenCV version back to 3.4.2 or earlier resolves the problem. Source: https://blog.csdn.net/qq_41854650/article/details/97276690
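For reference, the situation can be detected from Python roughly as follows; this is only a sketch, and whether the call succeeds depends on how the installed OpenCV build was configured.

import cv2

try:
    # On builds compiled without OPENCV_ENABLE_NONFREE this raises cv2.error
    # with the "this algorithm is patented..." message quoted above.
    sift = cv2.xfeatures2d.SIFT_create()
    print("SIFT is available in this build")
except cv2.error as err:
    print("SIFT unavailable:", err)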

Recognizing an image from a list with OpenCV SIFT using the FLANN matching

独自空忆成欢 submitted on 2019-12-02 20:56:52
The point of the application is to recognize an image from an already set list of images. The list of images has had their SIFT descriptors extracted and saved in files. Nothing interesting here:

std::vector<cv::KeyPoint> detectedKeypoints;
cv::Mat objectDescriptors;
// Extract data
cv::SIFT sift;
sift.detect(image, detectedKeypoints);
sift.compute(image, detectedKeypoints, objectDescriptors);
// Save the file
cv::FileStorage fs(file, cv::FileStorage::WRITE);
fs << "descriptors" << objectDescriptors;
fs << "keypoints" << detectedKeypoints;
fs.release();

Then the device takes a picture. SIFT
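The question's code is C++, but the FLANN matching stage it leads into can be sketched in Python as follows; the 0.7 ratio threshold and the helper names are assumptions for illustration, not part of the original post.

import cv2
import numpy as np

FLANN_INDEX_KDTREE = 1   # KD-tree index, suitable for float SIFT descriptors
flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                              dict(checks=50))

def best_match_index(query_des, stored_des_list, ratio=0.7):
    """Return the index of the stored descriptor set with the most good matches."""
    scores = []
    for des in stored_des_list:
        matches = flann.knnMatch(query_des, des, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        scores.append(len(good))
    return int(np.argmax(scores))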

How to search the image for an object with SIFT and OpenCV?

佐手、 submitted on 2019-12-02 20:23:10
Question: I am working on a simple playing card detection program. For now I have a working SIFT algorithm from here, and I have created some bounding boxes around the cards. Then I used SIFT on the card to be searched for and saved its descriptors. But what do I do next? Do I have to make a mask of the object and run it through the bounding boxes while running SIFT at every step? I couldn't find any tutorial on how to do that exactly. Hope someone can help me! Greets Max edit: I want to recognize
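One common answer to "what to do next" is to compute SIFT descriptors inside each bounding box and count ratio-test matches against the saved card descriptors; below is a minimal sketch under those assumptions (card_des, scene_gray and boxes are hypothetical variables, not from the original question).

import cv2

sift = cv2.xfeatures2d.SIFT_create()
bf = cv2.BFMatcher()

def count_good_matches(card_des, scene_gray, box, ratio=0.75):
    """Match the saved card descriptors against one bounding box of the scene."""
    x, y, w, h = box
    roi = scene_gray[y:y + h, x:x + w]          # crop the candidate region
    kp, des = sift.detectAndCompute(roi, None)
    if des is None:
        return 0
    matches = bf.knnMatch(card_des, des, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good)

# The box with the most good matches is the best candidate for the searched card:
# best_box = max(boxes, key=lambda b: count_good_matches(card_des, scene_gray, b))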