
OpenBR Installation and Compilation

Submitted by 故事扮演 on 2019-12-03 05:12:53
Please credit the source when reposting: Gaussic. Original document: (link). I ran into a few moderately annoying pitfalls during installation and spent a long time on them, so here is a summary.

Install VS2013. First, install VS2013. The official site says the Express edition is sufficient; I installed the Chinese Professional edition, and apart from occasional character-encoding problems during compilation it caused no issues.

Download and install CMake 3.0.2. It is best to match this version to avoid pitfalls. Download: (link). During installation, remember to select "Add CMake to PATH" so that CMake is added to the environment variables.

Download OpenCV 2.4.11. In theory a newer version also works, but matching versions is safest. Official link: (link). Extract it wherever you like; the official guide puts it on the C: drive, which is convenient.

Next comes compilation. First open the VS2013 x64 Compatible Tools Command Prompt (Start menu -> All Programs -> Visual Studio 2013 -> Visual Studio Tools), then enter:

$ cd C:\opencv-2.4.11
$ mkdir build-msvc2013
$ cd build-msvc2013
$ cmake -G "NMake Makefiles" -DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF -DWITH_FFMPEG=OFF -DCMAKE_BUILD_TYPE

Is number recognition on iPhone possible in real-time?

Submitted by 廉价感情. on 2019-12-02 19:41:29
I need to recognise numbers from the camera image on iPhone, in real-time. I know there will be no more than 5 digits in the image. Is this problem realistic to solve given the computational specifications of the iPhone? Does anyone have any experience using the Tesseract OCR library, and do you think it could be solved by using it?

That depends on your definition of "real-time", but yes, it should be possible to do relatively fast recognition of just the digits 0-9 on an iPhone 4, particularly if you can constrain the fonts, lighting conditions, etc. that they will appear in. I highly recommend reading the
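When the fonts and conditions are constrained, digit recognition can be reduced to matching against a small set of templates. The sketch below is a toy illustration of that idea, assuming hypothetical 5x3 binary glyphs; a real system would run Tesseract or a trained classifier on the camera frames.

```python
# Toy digit recognition by template matching over hypothetical 5x3 glyphs.
TEMPLATES = {
    0: ["111", "101", "101", "101", "111"],
    1: ["010", "010", "010", "010", "010"],
    7: ["111", "001", "001", "001", "001"],
}

def hamming(a, b):
    """Count mismatching pixels between two glyph bitmaps."""
    return sum(ra[i] != rb[i] for ra, rb in zip(a, b) for i in range(len(ra)))

def classify(glyph):
    """Return the template digit with the fewest mismatching pixels."""
    return min(TEMPLATES, key=lambda d: hamming(TEMPLATES[d], glyph))

# A "7" with one flipped pixel still matches the 7 template.
noisy_seven = ["111", "001", "001", "011", "001"]
print(classify(noisy_seven))  # -> 7
```

Because the per-glyph cost is a handful of integer comparisons, this scales easily to 5 digits per frame even on old hardware; the hard part in practice is segmenting and binarizing the glyphs from the camera image.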

Using OpenCV to detect parking spots

Submitted by 房东的猫 on 2019-12-02 19:40:35
I am trying to use OpenCV to automatically find and locate all parking spots in an empty parking lot. Currently, I have code that thresholds the image, applies Canny edge detection, and then uses probabilistic Hough lines to find the lines that mark each parking spot. The program then draws the lines and the points that make up the lines. Here is the code:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int threshold_value = 150;
int threshold_type = 0;
int const max_value = 255;
int const max_type = 4
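The Hough step in the pipeline above works by letting every edge pixel vote for all (rho, theta) line parameterizations passing through it; collinear pixels pile votes into one bin. A minimal, dependency-free sketch of that accumulator (not the probabilistic variant OpenCV's HoughLinesP uses):

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180):
    """Vote each edge point into (rho, theta-index) bins; peaks are lines."""
    acc = Counter()
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    return acc

# Edge pixels along the vertical line x = 4, like one stripe of a
# parking-spot marking.
edge_points = [(4, y) for y in range(10)]
acc = hough_lines(edge_points)
(rho, t), votes = acc.most_common(1)[0]
print(rho, votes)  # strongest bin: all 10 points agree on rho = 4
```

For parking spots specifically, the usual follow-up is to cluster the detected lines by angle and spacing, since the stripes are parallel and roughly evenly spaced.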

OpenCV: Using cvGoodFeaturesToTrack with C++ mat variable

Submitted by こ雲淡風輕ζ on 2019-12-02 09:21:25
I am trying to use the cvGoodFeaturesToTrack function in Visual Studio 2010 with the image type as Mat. Most of the examples I have seen use the IplImage pointer. Right now I have this:

int w, h; // video frame size
Mat grayFrame;
Mat eigImage;
Mat tempImage;
const int MAX_CORNERS = 10;
CvPoint2D32f corners[MAX_CORNERS] = {0};
int corner_count = MAX_CORNERS;
double quality_level = 0.1;
double min_distance = 10;
int eig_block_size = 3;
int use_harris = false;
w = CurrFrame.size().width;
h = CurrFrame.size().height;
cvtColor(CurrFrame, grayFrame, CV_BGR2GRAY);
cvGoodFeaturesToTrack(&grayFrame,
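The underlying issue is mixing APIs: the legacy cvGoodFeaturesToTrack expects IplImage*, while cv::Mat users should call the C++ function cv::goodFeaturesToTrack, which accepts Mat directly. Under its default settings that function ranks pixels by the Shi-Tomasi score, the smaller eigenvalue of the 2x2 gradient structure tensor. A small sketch of that score (illustrative tensor values are made up):

```python
import math

def min_eigenvalue(sxx, sxy, syy):
    """Smaller eigenvalue of the 2x2 structure tensor [[sxx, sxy], [sxy, syy]],
    i.e. the Shi-Tomasi corner score that goodFeaturesToTrack thresholds."""
    mean = (sxx + syy) / 2.0
    return mean - math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)

# A corner has strong gradients in both directions -> large score;
# an edge has gradient energy in only one direction -> score near zero.
corner_score = min_eigenvalue(10.0, 0.0, 8.0)  # -> 8.0
edge_score = min_eigenvalue(10.0, 0.0, 0.0)    # -> 0.0
print(corner_score, edge_score)
```

The quality_level parameter in the question is a fraction of the best score found in the image: candidates scoring below quality_level times the maximum are rejected.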

OpenBR Quick Start

Submitted by 心已入冬 on 2019-12-01 20:55:46
Please credit the source when reposting: Gaussic. A translation of the official documentation plus hands-on notes, based on the Windows version. Official site: (link).

This tutorial aims to familiarize you with the ideas, objects, and motivation behind OpenBR through a few fun examples. Note that a webcam is required.

OpenBR is a C++ library built on Qt, OpenCV, and Eigen. It can be used from the command line via the br command, or through its C++ and C APIs. Using the br command is the simplest and fastest way to get started, and all examples in this tutorial are based on it.

First, make sure OpenBR is installed correctly. Windows installation tutorial: (link). For other platforms, see the official site: (link). The official documentation contains some errors; Windows users can follow the link above instead.

In a terminal or command prompt, enter:

$ br -gui -algorithm "Show(false)" -enroll 0.webcam

If you followed each step above, your webcam should now be open and capturing video. Congratulations, you are using OpenBR. Note: Windows users should first switch to the openbr\build-msvc2013\install\bin directory, or add that directory to the environment variables.

Now let's talk about what the command above actually does. -gui, -algorithm, and -enroll are OpenBR flags; they are used to specify the operations the br application should perform

How to apply iOS VNImageHomographicAlignmentObservation warpTransform?

Submitted by 懵懂的女人 on 2019-12-01 09:24:00
I'm testing Apple's Vision Alignment API and have questions regarding VNHomographicImageRegistrationRequest. Has anyone got it to work? I can get the warpTransform out of it, but I've yet to see a matrix that makes sense; that is, I'm unable to get a result that warps the image back onto the source image. I'm using OpenCV's warpPerspective to handle the warping. I'm calling this to get the transform:

class func homography(_ cgImage0 : CGImage!, _ cgImage1 : CGImage!, _ orientation : CGImagePropertyOrientation, completion: (matrix_float3x3?) -> ()) {
    let registrationSequenceReqHandler =
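A frequent source of "matrices that make no sense" when moving a homography between frameworks is layout: simd's matrix_float3x3 is built from column vectors, while OpenCV's warpPerspective expects a row-major 3x3, so the matrix often needs transposing (and the two APIs may also disagree on which image is source vs. destination). Whatever the convention, applying the matrix to a point is just a homogeneous multiply and divide; a minimal row-major sketch:

```python
def apply_homography(h, x, y):
    """Map (x, y) through a 3x3 homography h (row-major nested lists),
    dividing by the homogeneous coordinate w."""
    xh = h[0][0] * x + h[0][1] * y + h[0][2]
    yh = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xh / w, yh / w

# A pure translation by (5, -2) as a homography:
translate = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]
print(apply_homography(translate, 10, 10))  # -> (15.0, 8.0)
```

A quick sanity check like this on a few known correspondences (does the warped corner land where you expect?) usually reveals whether the matrix needs a transpose or an inverse before handing it to warpPerspective.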

OpenCV 2.3 camera calibration

Submitted by 天大地大妈咪最大 on 2019-11-30 09:56:46
I'm trying to use the OpenCV 2.3 Python bindings to calibrate a camera. I've used the data below in MATLAB and the calibration worked, but I can't seem to get it to work in OpenCV. The camera matrix I set up as an initial guess is very close to the answer calculated from the MATLAB toolbox.

import cv2
import numpy as np
obj_points = [[-9.7,3.0,4.5],[-11.1,0.5,3.1],[-8.5,0.9,2.4],[-5.8,4.4,2.7],[-4.8,1.5,0.2],[-6.7,-1.6,-0.4],[-8.7,-3.3,-0.6],[-4.3,-1.2,-2.4],[-12.4,-2.3,0.9],
[-14.1,-3.8,-0.6],[-18.9,2.9,2.9],[-14.6,2.3,4.6],[-16.0,0.8,3.0],[-18.9,-0.1,0.3],
[-16.3,-1.7,0.5],[-18.6,-2.7,-2.2]]
img
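The "camera matrix" being estimated here is the pinhole intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; calibration finds the K (plus per-view rotation and translation) that best reprojects the 3D object points onto the observed image points. A sketch of that projection, using made-up intrinsic values purely for illustration:

```python
def project(k, point3d):
    """Project a 3D camera-frame point through intrinsic matrix k
    (pinhole model, no lens distortion): u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    fx, cx = k[0][0], k[0][2]
    fy, cy = k[1][1], k[1][2]
    x, y, z = point3d
    return fx * x / z + cx, fy * y / z + cy

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]
print(project(K, (0.1, -0.05, 2.0)))  # -> (360.0, 220.0)
```

One common pitfall with this kind of single-view, non-planar setup is that the object and image points must be passed as float32 arrays in matching order, and the initial guess is only honored when the corresponding intrinsic-guess flag is set.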

Estimate Brightness of an image Opencv

Submitted by 纵然是瞬间 on 2019-11-29 02:18:35
Question: I have been trying to obtain the image brightness in OpenCV, and so far I have used calcHist and taken the average of the histogram values. However, I feel this is not accurate, as it does not actually determine the brightness of an image. I performed calcHist over a grayscale version of the image, and tried to differentiate between the average values obtained from bright images and those from moderate ones. I have not been successful so far. Could you please help me with a method or
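One common alternative to averaging histogram bins is to average a perceptual luma value per pixel, e.g. the Rec. 601 weighting 0.299 R + 0.587 G + 0.114 B, which weights green most heavily to match human sensitivity. A minimal sketch over a list of (r, g, b) tuples:

```python
def perceived_brightness(pixels):
    """Mean Rec. 601 luma (0.299 R + 0.587 G + 0.114 B) over (r, g, b)
    pixel tuples -- a rough proxy for how bright the image looks."""
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

# Half white, half black averages to mid-gray.
print(perceived_brightness([(255, 255, 255), (0, 0, 0)]))  # -> 127.5
```

Equivalently, converting the image to HSV and averaging the V channel gives a similar single-number estimate; note that OpenCV stores color images in BGR order, so the channel weights must be applied accordingly.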

mAP metric in object detection and computer vision

Submitted by 僤鯓⒐⒋嵵緔 on 2019-11-28 15:11:19
In computer vision and object detection, the common evaluation metric is mAP. What is it and how is it calculated?

Jonathan: Quotes are from the above-mentioned Zisserman paper, section 4.2, Evaluation of Results (page 11). First, an "overlap criterion" is defined as an intersection-over-union greater than 0.5 (i.e. if a predicted box satisfies this criterion with respect to a ground-truth box, it is considered a detection). Then a matching is made between the GT boxes and the predicted boxes using this "greedy" approach: Detections output by a method were assigned to ground truth objects satisfying
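The overlap criterion above reduces to a single formula: intersection area divided by union area of the two boxes. A self-contained sketch, with boxes given as (x1, y1, x2, y2) corners:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# PASCAL VOC counts a detection as correct when IoU > 0.5:
gt = (0, 0, 10, 10)
det = (1, 1, 11, 11)
print(iou(gt, det))  # 81 / 119, about 0.68 -> a valid detection
```

Given these matches, each detection becomes a true or false positive; sweeping the detector's confidence threshold yields a precision-recall curve, AP is the area under that curve for one class, and mAP is the mean of the per-class APs.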