pose-estimation

How to calculate the angle of a square object with respect to the image in the 2D plane using a camera?

喜你入骨 submitted on 2021-01-29 07:46:37
Question: I have captured an image using a webcam attached upside-down above a table, looking straight down at it. On the table I have a square object or piece of card. I have successfully detected the object and found its center coordinates (centroid). Now I want to find the rotation angle of the object with respect to the image, considering everything in the 2D image plane. How can I calculate the angle? This image represents what I am trying to achieve.

Answer 1: I got the solution. I wrote the code to perform the …
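
A minimal sketch of one common approach (not necessarily the answer's): fit a rotated rectangle to the card's contour with cv::minAreaRect and read off its angle. The file name "table.png" and the Otsu-threshold segmentation are assumptions; note that the range of the returned angle depends on the OpenCV version, and for a square it is only meaningful modulo 90 degrees.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    cv::Mat img = cv::imread("table.png", cv::IMREAD_GRAYSCALE);  // placeholder
    if (img.empty()) return 1;

    // Segment the card from the table (assumes a clear contrast).
    cv::Mat bin;
    cv::threshold(img, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 1;

    // Take the largest contour as the card and fit a rotated rectangle;
    // its angle is the in-plane rotation with respect to the image axes.
    auto card = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    cv::RotatedRect box = cv::minAreaRect(*card);
    std::cout << "centroid: " << box.center << "  angle: " << box.angle << std::endl;
    return 0;
}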

Reverse of OpenCV projectPoints

穿精又带淫゛_ submitted on 2021-01-27 06:08:06
Question: I have a camera facing the equivalent of a chessboard. I know the 3D world locations of the points as well as the 2D locations of the corresponding projected points on the camera image. All the world points lie in the same plane. I use solvePnP:

Matx33d camMat;
Matx41d distCoeffs;
Matx31d rvec;
Matx31d tvec;
std::vector<Point3f> objPoints;
std::vector<Point2f> imgPoints;
solvePnP(objPoints, imgPoints, camMat, distCoeffs, rvec, tvec);

I can then go from the 3D world points to the 2D image …
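
A minimal sketch of the reverse direction, assuming (as stated) that all world points lie in the plane Z = 0 and reusing the camMat/distCoeffs/rvec/tvec from the solvePnP call above: undistort the pixel into a viewing ray, then intersect that ray with the plane.

#include <opencv2/opencv.hpp>
#include <vector>

cv::Point3d backProjectToPlane(const cv::Point2d& pixel,
                               const cv::Matx33d& camMat,
                               const cv::Matx41d& distCoeffs,
                               const cv::Matx31d& rvec,
                               const cv::Matx31d& tvec) {
    // Undistort and normalize the pixel into a ray direction in the camera frame.
    std::vector<cv::Point2d> in{pixel}, out;
    cv::undistortPoints(in, out, camMat, distCoeffs);
    cv::Matx31d ray(out[0].x, out[0].y, 1.0);

    cv::Matx33d R;
    cv::Rodrigues(rvec, R);
    cv::Matx33d Rt = R.t();

    // World point: X = R^T (s * ray - t); choose the scale s so that X.z == 0.
    cv::Matx31d dirW = Rt * ray;   // ray direction expressed in the world frame
    cv::Matx31d tW = Rt * tvec;    // translation expressed in the world frame
    double s = tW(2) / dirW(2);

    cv::Matx31d X = Rt * (ray * s - tvec);
    return cv::Point3d(X(0), X(1), X(2));
}

The division by dirW(2) assumes the ray is not parallel to the plane, which holds for any pixel that actually images it.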

Understanding openCV aruco marker detection/pose estimation in detail: subpixel accuracy

浪子不回头ぞ submitted on 2021-01-05 06:51:30
Question: I am currently studying OpenCV's 'aruco' module, focusing especially on the pose estimation of ArUco markers and AprilTags. Looking into the subpixel accuracy, I have encountered a strange behaviour, which is demonstrated by the code below: if I provide a 'perfect' calibration (e.g. cx/cy equal to the image center and distortion set to zero) and a 'perfect' marker with known edge length, cv.detectMarkers only yields the correct value if the rotation is at 0, 90, 180 or 270 degrees. The …
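
For context, a minimal sketch of such a detection/pose setup using the pre-4.7 aruco API, with subpixel corner refinement switched on (the default is CORNER_REFINE_NONE, which is one knob that affects the accuracy being discussed). The intrinsics, dictionary, file name and marker edge length are placeholders.

#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <vector>

int main() {
    cv::Mat image = cv::imread("marker.png");                 // placeholder
    cv::Matx33d camMat(800, 0, 320, 0, 800, 240, 0, 0, 1);    // 'perfect' cx/cy
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);        // zero distortion

    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    cv::Ptr<cv::aruco::DetectorParameters> params =
        cv::aruco::DetectorParameters::create();
    params->cornerRefinementMethod = cv::aruco::CORNER_REFINE_SUBPIX;

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(image, dict, corners, ids, params);

    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::aruco::estimatePoseSingleMarkers(corners, 0.05f /* edge length, m */,
                                         camMat, distCoeffs, rvecs, tvecs);
    return 0;
}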

Mismatch between OpenCV projected points and Unity camera view

廉价感情. submitted on 2020-07-19 04:50:52
Question: We are working on an AR application in which we need to overlay a 3D model of an object on a video stream of that object. A Unity scene contains the 3D model, and a camera films the 3D object; the camera pose is initially unknown.

▶ What we have tried

We did not find a good solution to estimate the camera pose directly in Unity. We therefore used OpenCV, which provides an extensive library of computer vision functions. In particular, we locate Aruco tags and then pass their matching 3D-2D …
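
A minimal sketch of one common source of such mismatches: OpenCV's solvePnP returns a world-to-camera pose in a right-handed, y-down frame, while Unity expects a camera-in-world pose in a left-handed, y-up frame. The flip below (negating the Y axis) is one frequently used convention, not the only one, and the exact mapping depends on how the Unity scene is set up.

#include <opencv2/opencv.hpp>

void opencvPoseToUnity(const cv::Matx31d& rvec, const cv::Matx31d& tvec,
                       cv::Matx33d& camRotUnity, cv::Matx31d& camPosUnity) {
    cv::Matx33d R;
    cv::Rodrigues(rvec, R);

    // Invert: solvePnP gives world->camera, Unity wants the camera in world.
    cv::Matx33d Rcw = R.t();
    cv::Matx31d C = (Rcw * tvec) * (-1.0);  // camera center in world coords

    // Change handedness by conjugating with F = diag(1, -1, 1).
    cv::Matx33d F(1, 0, 0,
                  0, -1, 0,
                  0, 0, 1);
    camRotUnity = F * Rcw * F;
    camPosUnity = F * C;
}

On the Unity side the resulting rotation matrix would still have to be converted to a Quaternion and assigned to the camera transform.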

Tensorflow: Determine the output stride of a pretrained CNN model

被刻印的时光 ゝ submitted on 2020-02-25 04:16:17
Question: I have downloaded and am implementing an ML application using the TensorFlow Lite Posenet model. The output of this model is a heatmap, a part of CNNs that I am new to. One piece of information required to process the output is the "output stride". It is used to calculate the original coordinates of the keypoints found in the original image:

keypointPositions = heatmapPositions * outputStride + offsetVectors

But the documentation doesn't specify the output stride. Is there information or …
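
If the documentation omits it, the output stride can be recovered from the tensor shapes the interpreter reports, assuming the usual Posenet relation heatmapSize = (inputSize - 1) / outputStride + 1 holds for the downloaded model. A minimal sketch:

#include <iostream>

// Recover the stride from the input and heatmap sizes reported by the
// interpreter, e.g. a 257x257 input with a 9x9 heatmap gives (257-1)/(9-1) = 32.
int outputStride(int inputSize, int heatmapSize) {
    return (inputSize - 1) / (heatmapSize - 1);
}

int main() {
    std::cout << outputStride(257, 9) << std::endl;  // prints 32
    // The keypoint decode then follows the formula quoted above:
    // keypointPositions = heatmapPositions * outputStride + offsetVectors
    return 0;
}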

Pose estimation: solvePnP and epipolar geometry do not agree

爱⌒轻易说出口 submitted on 2020-01-13 06:00:18
Question: I have a relative camera pose estimation problem: I am looking at a scene with differently oriented cameras spaced a certain distance apart. Initially, I compute the essential matrix using the five-point algorithm and decompose it to get the R and t of camera 2 w.r.t. camera 1. I thought it would be a good idea to do a check by triangulating the two sets of image points into 3D and then running solvePnP on the 3D-2D correspondences, but the result I get from solvePnP is way off. I am …
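
A minimal sketch of the consistency check described, assuming matched point lists pts1/pts2 and shared intrinsics K. One caveat worth checking first: the t returned by recoverPose is only defined up to scale (it is unit-norm), so the solvePnP translation can only be expected to agree in direction, not magnitude.

#include <opencv2/opencv.hpp>
#include <vector>

void checkPose(const std::vector<cv::Point2f>& pts1,
               const std::vector<cv::Point2f>& pts2,
               const cv::Matx33d& K) {
    // Five-point algorithm: essential matrix, then R, t of camera 2 w.r.t. camera 1.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC);
    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t);

    // Triangulate in camera-1 coordinates: P1 = K [I | 0], P2 = K [R | t].
    cv::Mat P1 = cv::Mat::zeros(3, 4, CV_64F), P2(3, 4, CV_64F);
    cv::Mat(K).copyTo(P1.colRange(0, 3));
    R.copyTo(P2.colRange(0, 3));
    t.copyTo(P2.col(3));
    P2 = cv::Mat(K) * P2;

    cv::Mat pts4d;
    cv::triangulatePoints(P1, P2, pts1, pts2, pts4d);
    pts4d.convertTo(pts4d, CV_64F);

    std::vector<cv::Point3f> pts3d;
    for (int i = 0; i < pts4d.cols; ++i) {
        cv::Mat x = pts4d.col(i) / pts4d.at<double>(3, i);  // dehomogenize
        pts3d.emplace_back(x.at<double>(0), x.at<double>(1), x.at<double>(2));
    }

    // solvePnP on camera 2's 3D-2D correspondences should reproduce R, t
    // up to the scale caveat above and triangulation noise.
    cv::Mat rvec, tvec;
    cv::solvePnP(pts3d, pts2, K, cv::noArray(), rvec, tvec);
}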