homography

Camera homography

眉间皱痕 submitted on 2019-12-03 15:13:53
I am learning about camera matrices. I already know that I can get the homography of the camera (a 3x3 matrix) by using four points that lie in a plane in object space. I want to know whether we can get the homography with four points that are not in a plane. If yes, how can I get the matrix? What formulas should I look at? I also confuse homography with another concept: I only need three points if I want to convert points from one coordinate system to another. So why do we need four points to compute a homography? A homography maps points: 1. on a plane to points on another plane, 2. projections of
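As a reference point for the planar case (not from the original post; the coordinates below are invented): a homography is defined only up to scale, so it has 8 degrees of freedom, and each point correspondence contributes two equations, which is why four coplanar pairs with no three collinear determine it exactly. A minimal OpenCV C++ sketch:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>   // getPerspectiveTransform
#include <iostream>

using namespace cv;

int main() {
    // Projections of four coplanar points (no three collinear); values are made up
    Point2f src[4] = { Point2f(0, 0),     Point2f(100, 0),
                       Point2f(100, 100), Point2f(0, 100) };
    Point2f dst[4] = { Point2f(12, 20),   Point2f(110, 25),
                       Point2f(105, 130), Point2f(8, 125) };

    // Exact solution from 4 pairs; with more (noisy) pairs one would use
    // findHomography(..., CV_RANSAC) from the calib3d module instead
    Mat H = getPerspectiveTransform(src, dst);
    std::cout << "H =\n" << H << std::endl;
    return 0;
}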

Having some difficulty in image stitching using OpenCV

北慕城南 submitted on 2019-12-03 09:10:38
I'm currently working on image stitching using OpenCV 2.3.1 in Visual Studio 2010, but I'm having some trouble. Problem description: I'm trying to write code for stitching multiple images coming from a few cameras (about 3 or 4), i.e. the code should keep stitching images until I ask it to stop. The following is what I've done so far (for simplicity, I'll replace some parts of the code with just a few words): 1. Reading frames (images) from 2 cameras (currently I'm just working on 2 cameras). 2. Feature detection and descriptor calculation (SURF). 3. Feature matching using FlannBasedMatcher
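For orientation, here is a hedged two-image sketch of steps 1-3 plus a final warp, written against the OpenCV 2.x C++ API (file names and the SURF threshold are placeholders, still images replace the camera frames for brevity, and in OpenCV 2.4+ the SURF classes moved to the nonfree module):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>   // SURF lived here in 2.3.x
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;

int main() {
    Mat img1 = imread("cam1.jpg", 0), img2 = imread("cam2.jpg", 0);
    if (img1.empty() || img2.empty()) return -1;

    // 1. Feature detection and description (SURF)
    SurfFeatureDetector detector(400);
    std::vector<KeyPoint> kp1, kp2;
    detector.detect(img1, kp1);
    detector.detect(img2, kp2);
    SurfDescriptorExtractor extractor;
    Mat desc1, desc2;
    extractor.compute(img1, kp1, desc1);
    extractor.compute(img2, kp2, desc2);

    // 2. Matching with FLANN
    FlannBasedMatcher matcher;
    std::vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i) {
        pts1.push_back(kp1[matches[i].queryIdx].pt);
        pts2.push_back(kp2[matches[i].trainIdx].pt);
    }
    if (pts1.size() < 4) return -1;

    // 3. Homography (img2 -> img1) with RANSAC, then warp img2 onto a wide canvas
    Mat H = findHomography(pts2, pts1, CV_RANSAC);
    Mat canvas;
    warpPerspective(img2, canvas, H, Size(img1.cols + img2.cols, img1.rows));
    img1.copyTo(canvas(Rect(0, 0, img1.cols, img1.rows)));

    imwrite("stitched.jpg", canvas);
    return 0;
}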

Calculating homography matrix using arbitrary known geometrical relations

女生的网名这么多〃 submitted on 2019-12-03 08:58:37
I am using OpenCV for an optical measurement system. I need to carry out a perspective transformation between two images captured by a digital camera. In the camera's field of view I placed a set of markers (which lie in a common plane), which I use as corresponding points in both images. Using the markers' positions I can calculate the homography matrix. The problem is that the measured object, whose images I actually want to transform, is positioned at a small distance from the markers and parallel to the markers' plane. I can measure this distance. My question is how to take that
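For context (not part of the original thread, and stated under one common sign convention, e.g. Hartley and Zisserman): the homography induced between two views by a world plane with unit normal n and distance d in the first camera frame is

H = K2 (R - t n^T / d) K1^{-1}

so a plane parallel to the markers' plane at a measured offset only changes d (the sign depends on the direction of n); with known intrinsics K1, K2 and relative pose (R, t), a corrected homography for the object's plane can be re-assembled from the same formula.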

2D-3D homography matrix estimation

Anonymous (unverified) submitted on 2019-12-03 08:46:08
Question: I am working with my Kinect on some 2D/3D image processing. Here is my problem: I have points in 3D (x,y,z) which lie on a plane. I also know the coordinates of those points in the RGB image (x,y). Now I want to estimate a 2D-3D homography matrix so that, for an arbitrary image point (x1,y1), I can estimate its (x1,y1,z1) coordinates. I think that is possible, but I don't know where to start. Thanks! Answer 1: What you're looking for is a camera projection matrix, not a homography. A homography maps a plane seen from a camera to the same plane seen from another. For
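One concrete way to get the pixel-to-3D mapping when the 3D points all lie on a plane (a sketch under that assumption, not from the thread; all numbers are placeholders): parameterize the plane with a 2D frame, fit an ordinary image-to-plane homography, and lift mapped points back to 3D.

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>   // findHomography
#include <cmath>
#include <iostream>
#include <vector>

using namespace cv;

int main() {
    // Known correspondences: 3D points lying on one plane, and their RGB pixels
    std::vector<Point3f> world;
    std::vector<Point2f> pixels;
    world.push_back(Point3f(0.0f, 0.0f, 1.0f)); pixels.push_back(Point2f(100, 100));
    world.push_back(Point3f(0.5f, 0.0f, 1.0f)); pixels.push_back(Point2f(300, 110));
    world.push_back(Point3f(0.5f, 0.5f, 1.0f)); pixels.push_back(Point2f(310, 300));
    world.push_back(Point3f(0.0f, 0.5f, 1.0f)); pixels.push_back(Point2f(105, 290));

    // Build a 2D coordinate frame on the plane: origin p0, orthonormal axes u, v
    Point3f p0 = world[0];
    Point3f e1 = world[1] - p0;
    Point3f e2 = world[2] - p0;
    Point3f n  = e1.cross(e2);
    Point3f u  = e1 * (1.0f / std::sqrt(e1.dot(e1)));
    Point3f v  = n.cross(e1);
    v = v * (1.0f / std::sqrt(v.dot(v)));

    // Express every 3D point in plane coordinates (a, b)
    std::vector<Point2f> planeCoords;
    for (size_t i = 0; i < world.size(); ++i) {
        Point3f d = world[i] - p0;
        planeCoords.push_back(Point2f(d.dot(u), d.dot(v)));
    }

    // Plain 2D homography: image pixels -> plane coordinates
    Mat H = findHomography(pixels, planeCoords);

    // Lift an arbitrary pixel back to 3D through the plane frame
    std::vector<Point2f> q(1, Point2f(200.f, 200.f)), qPlane;
    perspectiveTransform(q, qPlane, H);
    Point3f X = p0 + u * qPlane[0].x + v * qPlane[0].y;
    std::cout << X.x << " " << X.y << " " << X.z << std::endl;
    return 0;
}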

OpenCV warpperspective

Anonymous (unverified) submitted on 2019-12-03 08:44:33
Question: For some reason, whenever I use OpenCV's warpPerspective() function, the final warped image does not contain everything in the original image. The left part of the image seems to get cut off. I think the reason this is happening is that the warped image is created at the leftmost position of the canvas for warpPerspective(). Is there some way to fix this? Thanks. Answer 1: The problem occurs because the homography maps part of the image to negative x,y values, which are outside the image area and so cannot be plotted. What we wish to do is to
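A commonly used fix, shown here as a sketch rather than the answer's exact code: transform the source corners with H, pre-multiply H by a translation so nothing lands at negative coordinates, and size the output canvas to the warped bounding box (H is assumed to be CV_64F, as returned by findHomography; the toy homography in main is invented):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>   // warpPerspective
#include <algorithm>
#include <vector>

using namespace cv;

// Warp 'src' by the 3x3 homography H so that nothing is lost to negative
// coordinates; returns the translation that was applied.
Mat warpWholeImage(const Mat& src, const Mat& H, Mat& dst) {
    std::vector<Point2f> corners(4), warped(4);
    corners[0] = Point2f(0, 0);
    corners[1] = Point2f((float)src.cols, 0);
    corners[2] = Point2f((float)src.cols, (float)src.rows);
    corners[3] = Point2f(0, (float)src.rows);
    perspectiveTransform(corners, warped, H);

    float minX = warped[0].x, minY = warped[0].y, maxX = warped[0].x, maxY = warped[0].y;
    for (int i = 1; i < 4; ++i) {
        minX = std::min(minX, warped[i].x); maxX = std::max(maxX, warped[i].x);
        minY = std::min(minY, warped[i].y); maxY = std::max(maxY, warped[i].y);
    }

    // Shift everything into positive coordinates and enlarge the canvas
    Mat T = (Mat_<double>(3, 3) << 1, 0, -minX,
                                   0, 1, -minY,
                                   0, 0,     1);
    warpPerspective(src, dst, T * H, Size(cvCeil(maxX - minX), cvCeil(maxY - minY)));
    return T;   // useful if other images must be placed on the same canvas
}

int main() {
    Mat img(240, 320, CV_8UC3, Scalar(0, 128, 255));
    Mat H = (Mat_<double>(3, 3) << 1, 0.2, -60,   // toy homography that pushes pixels left
                                   0,   1,  10,
                                   0,   0,   1);
    Mat out;
    warpWholeImage(img, H, out);
    return 0;
}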

Using estimateRigidTransform instead of findHomography

只愿长相守 submitted on 2019-12-03 08:30:50
The example in the link below uses findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation, so I want to replace findHomography with estimateRigidTransform. http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector<Point2f>. Mat H = estimateRigidTransform(objPoints, scePoints, false); Following the method used in
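One way to keep following the tutorial after the swap (a sketch; the variable names mirror the question's): estimateRigidTransform returns a 2x3 affine matrix, so embed it into a 3x3 matrix with bottom row (0, 0, 1) before calling perspectiveTransform on the object corners.

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>   // estimateRigidTransform
#include <vector>

using namespace cv;

// Map the object-image corners into the scene using the 2x3 result of
// estimateRigidTransform; returns an empty vector if the estimation fails.
std::vector<Point2f> projectCorners(const std::vector<Point2f>& objPoints,
                                    const std::vector<Point2f>& scePoints,
                                    const std::vector<Point2f>& objCorners) {
    std::vector<Point2f> sceCorners;
    Mat A = estimateRigidTransform(objPoints, scePoints, false);   // 2x3, CV_64F
    if (A.empty()) return sceCorners;

    Mat H = Mat::eye(3, 3, CV_64F);            // embed the affine part in a 3x3 matrix
    A.copyTo(H(Range(0, 2), Range(0, 3)));     // bottom row stays (0, 0, 1)
    perspectiveTransform(objCorners, sceCorners, H);               // as in the tutorial
    return sceCorners;
}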

Homography and Affine Transformation

微笑、不失礼 submitted on 2019-12-03 05:59:12
Hi, I am a beginner in computer vision and I wish to know what exactly the difference is between a homography and an affine transformation. If you want to find the translation between two images, which one would you use and why? From the papers and definitions I found online, I have yet to find the difference between them and where one is used instead of the other. Thanks for your help. I have set it down in layman's terms. Homography: a homography is a matrix that maps a given set of points in one image to the corresponding set of points in another image. The homography is a 3x3 matrix that maps
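A small illustration of the difference in degrees of freedom (the coordinates are arbitrary examples, not from the answer): an affine transform is fixed by 3 point pairs (6 unknowns, bottom row fixed at 0 0 1) and keeps parallel lines parallel, while a homography needs 4 pairs (8 unknowns) and can model perspective foreshortening.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;

int main() {
    Point2f srcTri[3] = { Point2f(0, 0), Point2f(1, 0), Point2f(0, 1) };
    Point2f dstTri[3] = { Point2f(1, 1), Point2f(3, 1), Point2f(1, 4) };
    Mat A = getAffineTransform(srcTri, dstTri);        // 2x3: parallel lines stay parallel

    Point2f srcQ[4] = { Point2f(0, 0), Point2f(1, 0), Point2f(1, 1), Point2f(0, 1) };
    Point2f dstQ[4] = { Point2f(0, 0), Point2f(2, 0), Point2f(3, 2), Point2f(-1, 2) };
    Mat H = getPerspectiveTransform(srcQ, dstQ);       // 3x3: models perspective effects

    std::cout << "affine:\n" << A << "\nhomography:\n" << H << std::endl;
    return 0;
}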

How to estimate 2D similarity transformation (linear conformal, nonreflective similarity) in OpenCV?

放肆的年华 submitted on 2019-12-03 03:52:53
I'm trying to find a specific object in input images by matching SIFT descriptors and finding the transformation matrix with RANSAC. The object can only change in the scene by a similarity transform in 2D space (scaled, rotated, translated), so I need to estimate a restricted similarity transform instead of a full 3x3 homography matrix. How can I achieve this in OpenCV? You can use estimateRigidTransform (I do not know whether it uses RANSAC; the code at http://code.opencv.org/projects/opencv/repository/revisions/2.4.4/entry/modules/video/src/lkpyramid.cpp says RANSAC in its comments); the third parameter
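A sketch along the lines of that suggestion (the point-vector names are illustrative, standing for the matched SIFT locations):

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>   // estimateRigidTransform
#include <vector>

using namespace cv;

// With fullAffine = false the model is restricted to scale + rotation + translation
// (4 DOF): a 2x3 matrix of the form [ s*cos(a)  -s*sin(a)  tx ; s*sin(a)  s*cos(a)  ty ].
Mat similarityFromMatches(const std::vector<Point2f>& objPoints,
                          const std::vector<Point2f>& scePoints) {
    Mat S = estimateRigidTransform(objPoints, scePoints, false);
    // Newer OpenCV (3.2+) also offers estimateAffinePartial2D, which exposes RANSAC explicitly
    return S;
}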

Drawing rectangle around detected object using SURF

落花浮王杯 submitted on 2019-12-03 03:45:36
I am trying to detect an object with the following code involving a SURF detector. I do not want to draw matches; I want to draw a rectangle around the detected object, but somehow I am unable to get a correct homography. Please can anyone point out where I am going wrong? #include <stdio.h> #include <iostream> #include "opencv2/core/core.hpp" #include "opencv2/features2d/features2d.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include "opencv2/calib3d/calib3d.hpp" using namespace cv; int main() { Mat object = imread( "sample.jpeg", CV_LOAD_IMAGE_GRAYSCALE );
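For reference, the usual corner-mapping approach, sketched here rather than taken from the poster's code: run findHomography on the matched keypoint locations, map the object image's four corners into the scene with perspectiveTransform, and connect them with lines.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

using namespace cv;

// objPts/scePts are the matched keypoint locations (object image / scene image);
// 'object' is the object image (used only for its size), 'scene' is drawn on in place.
void drawDetectedObject(const std::vector<Point2f>& objPts,
                        const std::vector<Point2f>& scePts,
                        const Mat& object, Mat& scene) {
    if (objPts.size() < 4) return;                       // findHomography needs >= 4 pairs
    Mat H = findHomography(objPts, scePts, CV_RANSAC);

    std::vector<Point2f> objCorners(4), sceCorners(4);
    objCorners[0] = Point2f(0, 0);
    objCorners[1] = Point2f((float)object.cols, 0);
    objCorners[2] = Point2f((float)object.cols, (float)object.rows);
    objCorners[3] = Point2f(0, (float)object.rows);
    perspectiveTransform(objCorners, sceCorners, H);     // object corners -> scene frame

    for (int i = 0; i < 4; ++i)                          // connect the mapped corners
        line(scene, sceCorners[i], sceCorners[(i + 1) % 4], Scalar(0, 255, 0), 2);
}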