PerspectiveCamera

OpenCV: use solvePnP to determine homography

六眼飞鱼酱① submitted on 2019-12-21 05:00:34
Question: Over the past weeks I've tried to learn to rectify images, and with the help of the people here I've managed to understand it better. About a week ago I set up a test example which I wanted to rectify (view the image from above). This worked fine (original: http://sitedezign.net/original.jpg and rectified: http://sitedezign.net/rectified.jpg) with the function T = cv2.getPerspectiveTransform(UV_cp, XYZ_gcp), where T becomes the homography. When I tried to do this with real-world photos it
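The excerpt is truncated, but the core operation can be sketched. cv2.getPerspectiveTransform solves for a 3x3 homography from exactly four point correspondences; below is a numpy-only sketch of the underlying Direct Linear Transform (not OpenCV's actual implementation — function and array names are illustrative):

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points (4+ pairs)."""
    # Direct Linear Transform: each correspondence yields two rows of A h = 0.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    # The smallest singular vector of A gives h up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                      # normalize so H[2, 2] == 1

def apply_homography(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]                     # perspective divide
```

Warping the whole image with the resulting H then corresponds to cv2.warpPerspective.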

The coordinate system of pinhole camera model

限于喜欢 submitted on 2019-12-20 09:45:31
Question: Recently, I have been studying the pinhole camera model, but I was confused by the model provided by OpenCV and the "Multiple View Geometry in Computer Vision" textbook. I know that the following photo is a simplified model which switches the positions of the image plane and the camera frame. For better illustration and understanding, and taking into consideration the principal point (u0, v0), the relation between the two frames is x = f(X/Z) + u0 and y = f(Y/Z) + v0. However, I was really confused because
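The relation x = f(X/Z) + u0, y = f(Y/Z) + v0 is exactly what the intrinsic matrix encodes; a minimal numpy sketch (the numbers are illustrative, and square pixels are assumed so a single f serves both axes):

```python
import numpy as np

def project(K, P):
    """Project a camera-frame point P = (X, Y, Z) to pixel coordinates."""
    # K = [[f, 0, u0], [0, f, v0], [0, 0, 1]] for a camera with square pixels.
    p = K @ np.asarray(P, dtype=float)
    return p[:2] / p[2]          # x = f*X/Z + u0,  y = f*Y/Z + v0

# Illustrative intrinsics: f = 100, principal point (320, 240).
K = np.array([[100.0,   0.0, 320.0],
              [  0.0, 100.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

The divide by Z is what makes the mapping perspective rather than affine.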

Calculating frustum FOV for a PerspectiveCamera

点点圈 submitted on 2019-12-19 10:16:57
Question: I currently have a screen consisting of two areas (values are just assumed for this particular example and may of course vary depending on the screen). The screen in total is 1080x1432 px (WxH) and consists of two areas, each clipped using glViewport. This is because I want area (1) not to fill the screen when zooming. Game area: can be zoomed; the size is 1080x1277 px (WxH), located at the top. The HUD (FYI: objects from here can be moved to area (1)): non-zoomable; the size is 1080x154 px (WxH).
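Whatever the final layout, the vertical FOV of a perspective frustum follows from simple geometry: half the visible height over the camera distance gives the tangent of the half-angle. A sketch (the distance value is illustrative; the pixel sizes from the question only matter through the aspect ratio):

```python
import math

def vertical_fov_deg(visible_height, distance):
    """FOV such that a plane of visible_height at `distance` exactly fills the view."""
    return math.degrees(2.0 * math.atan(visible_height / (2.0 * distance)))

# The 1080x1277 game area fixes the aspect ratio; zooming changes only the FOV
# (or, equivalently, the camera distance while the FOV stays fixed).
aspect = 1080 / 1277
```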

Field of view + Aspect Ratio + View Matrix from Projection Matrix (HMD OST Calibration)

我的梦境 submitted on 2019-12-18 06:50:10
Question: I'm currently working on an augmented reality application. The targeted device being an Optical See-through HMD, I need to calibrate its display to achieve correct registration of virtual objects. I used that implementation of SPAAM for Android to do it, and the results are precise enough for my purpose. My problem is that the calibration application outputs a 4x4 projection matrix I could have directly used with OpenGL, for example. But the augmented reality framework I use only accepts optical
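If the framework wants fov/aspect/near/far rather than a raw matrix, a symmetric OpenGL-style projection matrix can be decomposed in closed form. A sketch under that symmetry assumption (a matrix from SPAAM calibration is generally off-axis, so for a real HMD this is only an approximation):

```python
import math
import numpy as np

def decompose_projection(P):
    """Recover (fovy_deg, aspect, near, far) from a symmetric GL projection matrix."""
    fovy = 2.0 * math.atan(1.0 / P[1, 1])     # P[1,1] = cot(fovy / 2)
    aspect = P[1, 1] / P[0, 0]                # P[0,0] = cot(fovy / 2) / aspect
    near = P[2, 3] / (P[2, 2] - 1.0)          # from P[2,2] = (f+n)/(n-f)
    far = P[2, 3] / (P[2, 2] + 1.0)           # and  P[2,3] = 2fn/(n-f)
    return math.degrees(fovy), aspect, near, far
```

Round-tripping a matrix built from known parameters is a quick sanity check before feeding the values to the framework.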

Error in calculating perspective transform for opencv in Matlab

微笑、不失礼 submitted on 2019-12-18 05:14:10
Question: I am trying to recode feature matching and homography using mexopencv. Mexopencv ports the OpenCV vision toolbox into Matlab. My code in Matlab using the OpenCV toolbox: function hello close all; clear all; disp('Feature matching demo, press key when done'); boxImage = imread('D:/pic/500_1.jpg'); boxImage = rgb2gray(boxImage); [boxPoints,boxFeatures] = cv.ORB(boxImage); sceneImage = imread('D:/pic/100_1.jpg'); sceneImage = rgb2gray(sceneImage); [scenePoints,sceneFeatures] = cv.ORB(sceneImage); if
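The snippet cuts off before the matching step. Independent of the Matlab/mexopencv syntax, ORB matching is brute-force Hamming distance plus Lowe's ratio test; a numpy sketch on raw uint8 descriptor rows (function name and ratio value are illustrative, not an OpenCV API):

```python
import numpy as np

def match_orb_descriptors(d1, d2, ratio=0.75):
    """Match binary uint8 descriptors d1 -> d2 (d2 needs at least two rows)."""
    matches = []
    for i, a in enumerate(d1):
        # Hamming distance = number of differing bits after XOR.
        dists = np.unpackbits(np.bitwise_xor(d2, a), axis=1).sum(axis=1)
        order = np.argsort(dists)
        # Lowe's ratio test: accept only clearly-best matches.
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The accepted pairs would then feed a RANSAC homography estimate, which in mexopencv should be cv.findHomography with the 'Method','Ransac' option.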

Perspective Projection in Android in an augmented reality application

前提是你 submitted on 2019-12-17 16:25:14
Question: Currently I'm writing an augmented reality app, and I have some problems getting the objects onto my screen. It's very frustrating for me that I'm not able to transform GPS points to the corresponding screen points on my Android device. I've read many articles and many other posts on Stack Overflow (I've already asked similar questions), but I still need your help. I did the perspective projection which is explained on Wikipedia. What do I have to do with the result of the perspective projection to
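After the perspective projection, the usual missing step is the perspective divide plus viewport mapping: divide clip coordinates by w, then scale the resulting NDC in [-1, 1] to pixels. A numpy sketch (OpenGL-style column-vector conventions; names are illustrative):

```python
import numpy as np

def world_to_screen(P, V, point, width, height):
    """Map a world point through projection P and view V to pixel coordinates."""
    clip = P @ V @ np.array([*point, 1.0])
    if clip[3] <= 0:                     # behind the camera: no valid screen point
        return None
    ndc = clip[:3] / clip[3]             # perspective divide -> [-1, 1] cube
    x = (ndc[0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height   # screen y grows downward
    return x, y
```

The behind-the-camera check matters in AR: without it, GPS points behind the user get mirrored onto the screen.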

Moving objects parallel to projection plane in three.js

烈酒焚心 submitted on 2019-12-13 15:07:39
Question: I want to move objects along a plane parallel to the projection plane with the mouse. This means that during the movement the distance between any picked object and the camera's projection plane (not the camera position) must remain constant. A similar question has been asked: Mouse / Canvas X, Y to Three.js World X, Y, Z, but unlike there I need a solution that works for arbitrary camera angles and camera/object positions, not only for the plane z = 0. It also has to work for orthographic projection. Now I
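The constant-distance constraint defines the plane to intersect: it passes through the picked object and its normal is the camera's viewing direction (the same intersection works for an orthographic camera, where the rays are simply parallel). A numpy sketch of the ray-plane step — in three.js, Raycaster would supply ray_origin and ray_dir:

```python
import numpy as np

def drag_on_camera_plane(ray_origin, ray_dir, obj_pos, cam_forward):
    """Intersect the mouse ray with the plane through obj_pos facing the camera."""
    n = cam_forward / np.linalg.norm(cam_forward)
    # Solve (ray_origin + t * ray_dir - obj_pos) . n = 0 for t.
    t = np.dot(obj_pos - ray_origin, n) / np.dot(ray_dir, n)
    return ray_origin + t * ray_dir
```

Because every returned point lies on that plane, its distance to the camera's projection plane never changes during the drag.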

How to transform an Android bitmap to wrap a cylinder and change perspective

核能气质少年 submitted on 2019-12-13 12:20:14
Question: I wrote a sample app that allows the Android user to take a picture and have the text content from a view overlaid on the image and saved to a gallery album. What I would like to do is transform the text bitmap before joining the two images. Specifically, I'd like to make the text curve up on the sides (simulating wrapping around a cylinder), and make it larger at the top than at the bottom (simulating a top-down perspective), as illustrated here. There is no need to interpret the camera
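Android's Matrix/setPolyToPoly can only apply projective warps, which keep straight lines straight, so the curved-edge effect needs a per-pixel (or per-mesh-vertex, via Canvas.drawBitmapMesh) remap. A numpy sketch of one plausible inverse mapping — the curl and taper formulas here are illustrative, not any Android API:

```python
import numpy as np

def cylinder_warp(img, curl=0.3, taper=0.3):
    """Curl columns upward toward the edges and taper rows toward the bottom."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            u = 2.0 * x / (w - 1) - 1.0                  # -1 .. 1 across the width
            # Edges sample from lower source rows -> the text appears to curve up.
            sy = y + curl * h * (1.0 - np.sqrt(max(0.0, 1.0 - u * u)))
            # Lower rows sample a wider source span -> the bottom appears smaller.
            scale = 1.0 + taper * (y / (h - 1))
            sx = (u / scale * 0.5 + 0.5) * (w - 1)
            if 0 <= int(sy) < h and 0 <= int(sx) < w:
                out[y, x] = img[int(sy), int(sx)]
    return out
```

A production version would use bilinear sampling rather than nearest-neighbor, but the inverse-mapping structure is the same.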

Detecting/correcting Photo Warping via Point Correspondences

萝らか妹 submitted on 2019-12-13 06:34:40
Question: I realize there are many cans of worms related to what I'm asking, but I have to start somewhere. Basically, what I'm asking is: given two photos of a scene, taken with unknown cameras, to what extent can I determine the (relative) warping between the photos? Below are two images of the 1904 World's Fair. They were taken at different levels on the wireless telegraph tower, so the cameras are more or less vertically in line. My goal is to create a model of the area (in Blender, if it matters)
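With unknown cameras, the standard starting point for "relative warping" between two views is the fundamental matrix, estimable from 8+ point correspondences. A numpy sketch of the unnormalized 8-point algorithm (real photos would need Hartley normalization and RANSAC, which OpenCV's findFundamentalMat provides):

```python
import numpy as np

def fundamental_8point(x1, x2):
    """Estimate F satisfying x2^T F x1 = 0 from 8+ correspondences (unnormalized)."""
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Project to rank 2: a valid fundamental matrix has a zero singular value.
    U, S, Vt = np.linalg.svd(F)
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt
```

F does not give a single image-to-image warp (that only exists for planar scenes or pure rotation); it constrains each point to an epipolar line, which is the input to the triangulation a Blender model would need.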