homography

OpenCV homography to find global xy coordinates from pixel xy coordinates

不想你离开。 submitted on 2019-11-29 05:13:51
I am trying to find the transformation matrix H so that I can multiply the (x, y) pixel coordinates and get the (x, y) real-world coordinates. Here is my code:

import cv2
import numpy as np
from numpy.linalg import inv

if __name__ == '__main__':
    D = [159.1, 34.2]
    I = [497.3, 37.5]
    G = [639.3, 479.7]
    A = [0, 478.2]
    # Read source image.
    im_src = cv2.imread('/home/vivek/june_14.png')
    # Four corners of the book in source image
    pts_src = np.array([D, I, G, A])
    # Read destination image.
    im_dst = cv2.imread('/home/vivek/june_14.png')
    # Four corners of the book in destination image.
    print("img1 shape:", im_dst.shape)

Multiple camera image stitching

北城余情 submitted on 2019-11-29 03:43:45
Question: I've been working on a project that stitches images from multiple cameras, but I think I've hit a bottleneck, and I have some questions about this issue. I want to mount the cameras on a vehicle in the future, which means the relative positions and orientations of the cameras are FIXED. Also, since I'm using multiple cameras and stitching their images using a HOMOGRAPHY, I'll put the cameras as close together as possible so that the errors (due to the fact that the foci of the cameras are not at the same position

OpenCV 2.4.3 - warpPerspective with reversed homography on a cropped image

我的梦境 submitted on 2019-11-29 00:38:10
When finding a reference image in a scene using SURF, I would like to crop the found object out of the scene and "straighten" it back using warpPerspective and the reversed homography matrix. That is, let's say I have this SURF result: Now I would like to crop the found object in the scene: and "straighten" only the cropped image with warpPerspective, using the reversed homography matrix. The result I'm aiming for is an image containing, roughly, only the object, plus some distorted leftovers from the original scene (since the crop is not 100% the object alone). Cropping the found

Camera pose estimation from homography or with solvePnP() function

↘锁芯ラ submitted on 2019-11-28 23:35:32
I'm trying to build a static augmented-reality scene over a photo, given 4 defined correspondences between coplanar points on a plane and the image. Here is the step-by-step flow: The user adds an image using the device's camera. Let's assume it contains a rectangle captured with some perspective. The user defines the physical size of the rectangle, which lies in a horizontal plane (YOZ in terms of SceneKit). Let's assume its center is the world origin (0, 0, 0), so we can easily find the (x, y, z) of each corner. The user defines the uv coordinates in the image coordinate system for each corner of the rectangle. A SceneKit scene is created

RANSAC Algorithm

落花浮王杯 submitted on 2019-11-28 19:19:01
Question: Can anybody please show me how to use the RANSAC algorithm to select common feature points in two images that have a certain amount of overlap? The problem arises in feature-based image stitching.

Answer 1: I implemented an image stitcher a couple of years back. The Wikipedia article on RANSAC describes the general algorithm well. When using RANSAC for feature-based image matching, what you want is to find the transform that best maps the first image onto the second image. This would be

How can you tell if a homography matrix is acceptable or not?

时光总嘲笑我的痴心妄想 submitted on 2019-11-28 19:05:51
Question: When using OpenCV's findHomography function to estimate a homography between two sets of points from different images, you will sometimes get a bad homography due to outliers in your input points, even if you use RANSAC or LMEDS.

// OpenCV Java example:
Mat H = Calib3d.findHomography(src_points, dst_points, Calib3d.RANSAC, 10);

How can you tell whether the resulting 3x3 homography matrix is acceptable or not? I have looked for an answer here on Stack Overflow and on Google and was
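A common heuristic is to run cheap sanity checks on the matrix itself: the determinant of the upper-left 2x2 block should be positive (no mirror flip), the implied scale change shouldn't be extreme, and the perspective terms should stay small for roughly planar scenes. A sketch of such checks; the thresholds are arbitrary assumptions to be tuned per application:

```python
import numpy as np

def homography_is_sane(H, max_scale=4.0, max_perspective=0.002):
    """Cheap plausibility checks for a 3x3 homography (thresholds are guesses)."""
    H = np.asarray(H, dtype=np.float64)
    if H[2, 2] == 0:
        return False
    H = H / H[2, 2]                       # normalise so H[2,2] == 1
    det2 = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0]
    if det2 <= 0:                         # orientation flipped: almost surely bad
        return False
    if det2 > max_scale ** 2 or det2 < 1.0 / max_scale ** 2:
        return False                      # implausibly large/small scale change
    if abs(H[2, 0]) > max_perspective or abs(H[2, 1]) > max_perspective:
        return False                      # extreme perspective distortion
    return True
```

These checks reject obviously degenerate matrices; a complementary test is to warp the source image's corner points through H and reject results whose quadrilateral is non-convex or wildly distorted.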

How do I use the relationships between Flann matches to determine a sensible homography?

我只是一个虾纸丫 submitted on 2019-11-28 12:42:53
I have a panorama image, and a smaller image of buildings seen within that panorama. What I want to do is recognise whether the buildings in the smaller image appear in the panorama, and how the two images line up. For this first example, I'm using a cropped version of my panorama image, so the pixels are identical.

import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math

# Load images
cwImage = cv2.imread('cw1.jpg', 0)
panImage = cv2.imread('pan1.jpg', 0)

# Prepare for SURF image analysis
surf = cv2.xfeatures2d.SURF_create(4000)

# Find

How to get rotation, translation, shear from a 3x3 homography matrix in C#

自作多情 submitted on 2019-11-28 10:33:27
I calculated the 3x3 homography matrix and I need to get rotation, translation, shear and scale from it, to use them as parameters in the Windows 8 MediaElement attributes.

Tom Larkworthy: see https://math.stackexchange.com/questions/78137/decomposition-of-a-nonsquare-affine-matrix

def getComponents(normalised_homography):
    '''((translationx, translationy), rotation, (scalex, scaley), shear)'''
    a = normalised_homography[0,0]
    b = normalised_homography[0,1]
    c = normalised_homography[0,2]
    d = normalised_homography[1,0]
    e = normalised_homography[1,1]
    f = normalised_homography[1,2]
    p = math.sqrt(a*a + b*b)
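Completed along the lines of the linked math.stackexchange answer, the decomposition looks roughly like this. Note it is only valid when the bottom row is [0, 0, 1], i.e. for affine matrices; a general homography's perspective terms have no equivalent in this parameterisation:

```python
import math

def get_components(H):
    """((tx, ty), rotation, (scale_x, scale_y), shear) of an affine 3x3 matrix."""
    a, b, c = H[0][0], H[0][1], H[0][2]
    d, e, f = H[1][0], H[1][1], H[1][2]
    p = math.sqrt(a * a + b * b)             # scale along x
    r = (a * e - b * d) / p                  # scale along y (determinant / p)
    q = (a * d + b * e) / (a * e - b * d)    # shear
    translation = (c, f)
    rotation = math.atan2(b, a)
    return translation, rotation, (p, r), q

components = get_components([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

For the identity matrix this yields zero translation, zero rotation, unit scales, and zero shear, which is a quick way to sanity-check a port of the formula to C#.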

Warping an image using control points

北城余情 submitted on 2019-11-28 09:11:14
I want to warp an image using control points, according to this scheme extracted from here: A and B contain the coordinates of the source and target vertices. I am computing the transformation matrix as:

A = [51 228; 51 127; 191 127; 191 228];
B = [152 57; 219 191; 62 240; 92 109];
X = imread('rectangle.png');
info = imfinfo('rectangle.png');
T = cp2tform(A,B,'projective');

Up to here it seems to work properly, because (using normalized coordinates) a source vertex produces its target vertex:

H = T.tdata.T;
> [51 228 1]*H
ans = -248.2186 -93.0820 -1.6330
> [51 228 1]*H / -1.6330
ans = 152

Extract transform and rotation matrices from homography?

点点圈 submitted on 2019-11-28 06:37:38
I have 2 consecutive images from a camera and I want to estimate the change in camera pose. I calculate the optical flow:

Const MAXFEATURES As Integer = 100
imgA = New Image(Of [Structure].Bgr, Byte)("pic1.bmp")
imgB = New Image(Of [Structure].Bgr, Byte)("pic2.bmp")
grayA = imgA.Convert(Of Gray, Byte)()
grayB = imgB.Convert(Of Gray, Byte)()
imagesize = cvGetSize(grayA)
pyrBufferA = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
    (imagesize.Width + 8, imagesize.Height / 3)
pyrBufferB = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
    (imagesize.Width + 8, imagesize.Height / 3)
features