pose-estimation

How to determine world coordinates of a camera?

Submitted by 筅森魡賤 on 2019-12-02 17:45:56
I have a rectangular target of known dimensions and location on a wall, and a mobile camera on a robot. As the robot is driving around the room, I need to locate the target and compute the location of the camera and its pose. As a further twist, the camera's elevation and azimuth can be changed using servos. I am able to locate the target using OpenCV, but I am still fuzzy on calculating the camera's position (actually, I've gotten a flat spot on my forehead from banging my head against a wall for the last week). Here is what I am doing: Read in previously computed camera intrinsics file Get
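Because the camera rides on a pan/tilt head, the pose recovered from the target has to be composed with the servo rotations before it says anything about the robot's heading. A minimal NumPy sketch of building and composing those rotations; the axis conventions (pan about y, tilt about x) and all function names are assumptions, not the poster's code:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x axis (tilt/elevation), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def rot_y(a):
    """Rotation about the y axis (pan/azimuth), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

def camera_to_robot(azimuth, elevation):
    """Rotation from the camera frame to the robot body frame for a
    pan-then-tilt mount; with both servos centred this is the identity."""
    return rot_y(azimuth) @ rot_x(elevation)
```

Multiplying this onto the orientation recovered from the target removes the servo angles from the estimate, leaving the robot body's pose.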

How can I estimate the camera pose with 3d-to-2d-point-correspondences (using opencv)

Submitted by 杀马特。学长 韩版系。学妹 on 2019-11-29 03:12:24
Question: Hello, my goal is to develop head-tracking functionality to be used in an aircraft (simulator) cockpit, in order to provide AR to support civilian pilots landing and flying in bad visual conditions. My approach is to detect characteristic points (in the dark simulator: LEDs) whose 3D coordinates I know, and then compute the estimated pose [R|t] (rotation concatenated with translation) of the head-worn camera. The problem I have is that the estimated pose seems to be always wrong and a
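A quick way to tell whether an estimated [R|t] is wrong is to reproject the known LED coordinates through the camera model and measure the pixel distance to the detections. A hedged NumPy sketch (the function names, and the simple pinhole model without lens distortion, are assumptions):

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project Nx3 world points through the pinhole model x = K (R X + t)."""
    cam = R @ pts3d.T + t.reshape(3, 1)   # points in the camera frame
    uv = K @ cam                          # homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T             # Nx2 pixel coordinates

def reprojection_error(K, R, t, pts3d, pts2d):
    """Mean pixel distance between reprojected and detected points;
    a usable pose should bring this down to a few pixels."""
    return np.linalg.norm(project(K, R, t, pts3d) - pts2d, axis=1).mean()
```

If this error is large for the pose returned by the solver, the usual suspects are wrong 2D/3D point correspondences or mismatched coordinate conventions, not the solver itself.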

Camera pose estimation (OpenCV PnP)

Submitted by 只愿长相守 on 2019-11-27 19:54:07
I am trying to get a global pose estimate from an image of four fiducials with known global positions using my webcam. I have checked many stackexchange questions and a few papers and I cannot seem to get a correct solution. The position numbers I do get out are repeatable, but in no way linearly proportional to camera movement. FYI, I am using C++ OpenCV 2.1. At this link is pictured my coordinate systems and the test data used below.

% Input to solvePnP():
imagePoints =  [ 481, 831;      % [x, y] format
                 520, 504;
                1114, 828;
                1106, 507]
objectPoints = [0.11, 1.15, 0;  % [x, y, z] format
                0.11, 1.37, 0
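One common cause of "repeatable but not proportional" numbers is reading tvec directly as the camera position: tvec is the world origin expressed in the camera frame, while the camera centre is C = -R^T t, and only the latter moves linearly with the camera. A small NumPy illustration (all values synthetic, not the question's data):

```python
import numpy as np

# A fixed camera orientation (a 30-degree rotation about z; any R works).
a = np.pi / 6
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0,          0,         1]])

def tvec_for_camera_at(C):
    """The world->camera translation solvePnP would return for centre C."""
    return -R @ C

C1 = np.array([0.5, 1.2, -2.0])
delta = np.array([0.3, 0.0, 0.0])     # move the camera 0.3 m along x
t1 = tvec_for_camera_at(C1)
t2 = tvec_for_camera_at(C1 + delta)

# tvec changes by -R @ delta, not by delta; but the recovered centre
# C = -R^T t moves exactly with the camera:
rec1 = -R.T @ t1
rec2 = -R.T @ t2
```

With a rotated camera, `t2 - t1` bears no obvious relation to the physical motion, while `rec2 - rec1` reproduces it exactly.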

Camera position in world coordinate from cv::solvePnP

Submitted by ♀尐吖头ヾ on 2019-11-26 21:18:47
I have a calibrated camera (intrinsic matrix and distortion coefficients) and I want to know the camera position, knowing some 3d points and their corresponding points in the image (2d points). I know that cv::solvePnP could help me, and after reading this and this I understand that the outputs of solvePnP, rvec and tvec, are the rotation and translation of the object in the camera coordinate system. So I need to find out the camera rotation/translation in the world coordinate system. From the links above it seems that the code is straightforward, in Python: found, rvec, tvec = cv2.solvePnP(object_3d
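Completing that idea: cv2.Rodrigues turns rvec into a rotation matrix R, and inverting [R|t] gives the camera pose in world coordinates. A NumPy sketch with the Rodrigues formula written out explicitly (it is the same map cv2.Rodrigues computes; the function names are mine):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (what cv2.Rodrigues returns)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).reshape(3) / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])       # cross-product matrix of the axis
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def camera_pose_in_world(rvec, tvec):
    """Invert the object->camera transform [R|t] from solvePnP:
    camera orientation in world is R^T, camera centre is -R^T t."""
    R = rodrigues(rvec)
    return R.T, -R.T @ np.asarray(tvec, dtype=float).reshape(3)
```

The same two lines work with `R, _ = cv2.Rodrigues(rvec)` in place of the hand-written formula.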

Computing x,y coordinate (3D) from image point

Submitted by 故事扮演 on 2019-11-26 02:42:19
Question: I have a task to locate an object in a 3D coordinate system. Since I have to get almost exact X and Y coordinates, I decided to track one color marker with a known Z coordinate placed on top of the moving object, like the orange ball in this picture: First, I did the camera calibration to get the intrinsic parameters, and after that I used cv::solvePnP to get the rotation and translation vectors, as in the following code: std::vector<cv::Point2f> imagePoints; std::vector<cv::Point3f
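With the pose from solvePnP and a known Z for the marker, the pixel can be back-projected by intersecting its viewing ray with the plane z = Z. A sketch under the usual pinhole assumptions (no lens distortion; the function name is made up):

```python
import numpy as np

def xy_on_plane(K, R, t, uv, Z):
    """Back-project pixel (u, v) onto the world plane z = Z, given the
    world->camera pose [R|t] from solvePnP and the intrinsic matrix K."""
    C = -R.T @ t                                            # camera centre in world
    ray = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # ray direction
    s = (Z - C[2]) / ray[2]                                 # scale that reaches z = Z
    return (C + s * ray)[:2]                                # world X, Y
```

A round trip (project a synthetic 3D point, then back-project its pixel with the true Z) recovers the original X and Y exactly, which is a useful self-check before running it on real detections.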