camera-calibration

Easy monocular camera self-calibration algorithm

Submitted by 前提是你 on 2020-01-12 06:21:08
Question: I have a video of a road/building and I want to create a 3D model out of it. The scene I am looking at is rigid and the drone is moving. I assume I have no extra information such as camera pose, accelerations, or GPS position. I would love to find a Python implementation that I can adapt to my liking. So far, I have decided to use OpenCV's calcOpticalFlowFarneback() for optical flow, which seems reasonably fast and accurate. With it, I can get the fundamental matrix F with findFundamentalMat().
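
A minimal sketch of that pipeline under stated assumptions (the frame filenames and the intrinsic guess K are illustrative, not from the question): dense Farneback flow is sampled into sparse correspondences, F is estimated robustly with findFundamentalMat(), and with an approximate K the essential matrix follows as E = K^T F K.

    import cv2
    import numpy as np

    # Hypothetical inputs: two consecutive video frames and a rough intrinsic guess K.
    gray1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    gray2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    K = np.array([[700.0, 0, 640.0], [0, 700.0, 360.0], [0, 0, 1.0]])

    # Dense optical flow; flow[y, x] = (dx, dy).
    flow = cv2.calcOpticalFlowFarneback(gray1, gray2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Sample the dense flow on a grid to get sparse point correspondences.
    h, w = gray1.shape
    ys, xs = np.mgrid[0:h:10, 0:w:10].reshape(2, -1)
    pts1 = np.stack([xs, ys], axis=1).astype(np.float32)
    pts2 = pts1 + flow[ys, xs]

    # Robust fundamental matrix, then the essential matrix given the intrinsic guess.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K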

opencv undistortion image with a strange circle

Submitted by 一世执手 on 2020-01-06 04:50:27
Question: I tried to use the OpenCV pinhole model to calculate calibration parameters and then undistort images. The problem is that there is a strange circle in the undistorted image, as shown below. The code, original, and result images are here: Any comment will be appreciated. Answer 1: Calibration is a more difficult task than it first appears. I think the main problem is that you show the target only in the center of the image, so the distortion parameters converge to this weird optimization. What would be
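
For reference, a hedged sketch of a calibration that covers the whole field of view (the board size and filenames are illustrative assumptions): moving the chessboard into the corners and edges of the frame constrains the distortion coefficients and usually removes this kind of artifact.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners of an assumed board
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_*.png"):              # hypothetical image set
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)

    # Undistort one of the images with the estimated model.
    undistorted = cv2.undistort(cv2.imread("calib_0.png"), K, dist)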

determine camera rotation and translation matrix from essential matrix

Submitted by 痞子三分冷 on 2020-01-04 02:58:34
Question: I am trying to extract the rotation matrix and translation matrix from the essential matrix. I took these answers as reference: Correct way to extract Translation from Essential Matrix through SVD; Extract Translation and Rotation from Fundamental Matrix. Now I've done the above steps, applying SVD to the essential matrix, but here comes the problem. According to my understanding of this subject, both R and T have two answers, which leads to 4 possible solutions of [R|T]. However, only one of the solutions
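
One way to resolve that ambiguity in OpenCV is the cheirality check: triangulate the matches and keep the single [R|T] that places the points in front of both cameras. A minimal sketch (the correspondences and K below are placeholders, not the poster's data):

    import cv2
    import numpy as np

    # Placeholder correspondences and intrinsics, for illustration only.
    pts1 = np.random.rand(50, 2).astype(np.float32) * 640
    pts2 = pts1 + np.random.rand(50, 2).astype(np.float32) * 5
    K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    E = E[:3]  # findEssentialMat may stack several candidates; keep one 3x3 matrix

    # SVD of E gives two rotations and +/- t, i.e. the four [R|T] candidates.
    R1, R2, t = cv2.decomposeEssentialMat(E)

    # recoverPose triangulates the matches and returns the one candidate whose
    # triangulated points have positive depth in both views.
    n_inliers, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)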

OpenCV camera calibration of an image crop (ROI submatrix)

Submitted by 只愿长相守 on 2020-01-01 17:00:13
Question: I have a bit of a problem working with OpenCV's undistort function. I am working with a camera that uses a wide-angle lens. Let's say my access to it is problematic, as it is already installed. The problem basically boils down to this: I have successfully measured all the lens parameters and can undistort a full-frame image with no problem; the issue is that I am actually working in a sort of linescan mode. We're using just a cut-out in the middle of the sensor, about 100 px tall. Images for
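
One common way to handle a cropped ROI (a sketch under stated assumptions, with illustrative numbers rather than the poster's calibration) is to keep the full-frame intrinsics and distortion coefficients but shift the principal point by the crop offset, then build undistortion maps at the ROI size:

    import cv2
    import numpy as np

    # Full-frame intrinsics and distortion measured on the complete sensor (illustrative).
    K_full = np.array([[1000.0, 0, 960.0], [0, 1000.0, 540.0], [0, 0, 1.0]])
    dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

    # The ROI: a 100 px tall strip starting at (x0, y0) of the full frame.
    x0, y0, roi_w, roi_h = 0, 490, 1920, 100

    # Shift the principal point so the model refers to ROI pixel coordinates.
    K_roi = K_full.copy()
    K_roi[0, 2] -= x0
    K_roi[1, 2] -= y0

    # Undistortion maps for the ROI size, then remap the cropped image.
    map1, map2 = cv2.initUndistortRectifyMap(K_roi, dist, np.eye(3), K_roi,
                                             (roi_w, roi_h), cv2.CV_32FC1)
    roi_image = cv2.imread("roi_strip.png")        # hypothetical cropped input
    undistorted_roi = cv2.remap(roi_image, map1, map2, cv2.INTER_LINEAR)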

Pixel coordinates to 3D line (opencv)

Submitted by 谁都会走 on 2020-01-01 12:01:51
Question: I have an image displayed on screen which is undistorted via cvInitUndistortMap & cvRemap (having done camera calibration), and the user clicks on a feature in the image. So I have the (u,v) pixel coordinates of the feature, and I also have the intrinsic matrix and the distortion coefficients. What I'm looking for is the equation of the 3D line, in camera/real-world coordinates, on which the feature the user clicked must lie. I already have the perpendicular distance between the camera's image plane
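
A minimal sketch of the back-projection (the intrinsics, distortion, and clicked pixel are illustrative values): undistort and normalize the pixel, and the 3D line in camera coordinates is the set of points s * d for s > 0 along the resulting ray direction d.

    import cv2
    import numpy as np

    # Illustrative calibration results and a hypothetical clicked pixel.
    K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
    dist = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])
    u, v = 400.0, 300.0

    # Undistort and normalize: the result is (x, y) on the z = 1 plane.
    pt = np.array([[[u, v]]], dtype=np.float32)
    x, y = cv2.undistortPoints(pt, K, dist).reshape(2)

    # Ray direction in the camera frame; the feature lies somewhere on s * d, s > 0.
    d = np.array([x, y, 1.0])
    d /= np.linalg.norm(d)
    print("ray direction:", d)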

How to convert points in depth space to color space in Kinect without using Kinect SDK functions?

Submitted by 烂漫一生 on 2020-01-01 01:15:06
Question: I am building an augmented reality application with 3D objects overlaid on top of the color video of the user. Kinect version 1.7 is used, and rendering of the virtual objects is done in OpenGL. I have managed to overlay 3D objects on the depth video successfully simply by using the intrinsic constants for the depth camera from the NuiSensor.h header and computing a projection matrix based on the formula I found at http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/. The 3D objects rendered with this
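
Without the SDK's mapping functions, the usual recipe is: deproject the depth pixel with the depth-camera intrinsics, transform the resulting 3D point with the depth-to-color extrinsics, and reproject it with the color-camera intrinsics. A sketch with made-up calibration numbers (the real R, t, and intrinsics must come from a stereo calibration of the two cameras):

    import numpy as np

    # Assumed calibration results for the depth and color cameras (not SDK constants).
    K_depth = np.array([[580.0, 0, 320.0], [0, 580.0, 240.0], [0, 0, 1.0]])
    K_color = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
    R = np.eye(3)                              # depth -> color rotation
    t = np.array([0.025, 0.0, 0.0])            # depth -> color translation, metres

    def depth_to_color(u, v, depth_m):
        """Map a depth pixel (u, v) with depth in metres to color-image coordinates."""
        # Deproject to a 3D point in the depth-camera frame.
        x = (u - K_depth[0, 2]) * depth_m / K_depth[0, 0]
        y = (v - K_depth[1, 2]) * depth_m / K_depth[1, 1]
        P_depth = np.array([x, y, depth_m])
        # Transform into the color-camera frame and reproject.
        P_color = R @ P_depth + t
        uv = K_color @ (P_color / P_color[2])
        return uv[0], uv[1]

    print(depth_to_color(300, 200, 1.5))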

Format of parameters in KITTI's calibration file

Submitted by 与世无争的帅哥 on 2019-12-31 05:07:07
Question: I accessed calibration files from the odometry part of KITTI, where the contents of one calibration file are as follows:
P0: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 0.000000000000e+00 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P1: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 -3.861448000000e+02 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 0
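
Each Pi line is the 3x4 projection matrix of the corresponding rectified camera, stored row-major as 12 floats. A small parsing sketch (the file path is hypothetical):

    import numpy as np

    def read_kitti_calib(path="calib.txt"):
        """Parse a KITTI odometry calib file into named 3x4 matrices."""
        matrices = {}
        with open(path) as f:
            for line in f:
                if ":" not in line:
                    continue
                key, values = line.split(":", 1)
                nums = np.array([float(x) for x in values.split()])
                if nums.size == 12:
                    matrices[key] = nums.reshape(3, 4)
        return matrices

    calib = read_kitti_calib()
    P0 = calib["P0"]
    # fx = P0[0, 0], cx = P0[0, 2], cy = P0[1, 2]; the fourth column of P1
    # encodes the stereo baseline as -fx * baseline.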

OpenCV OpenNI calibrate kinect

Submitted by 那年仲夏 on 2019-12-31 03:03:23
Question: I use the following code to capture from the Kinect:
capture.retrieve( depthMap, CV_CAP_OPENNI_DEPTH_MAP )
capture.retrieve( bgrImage, CV_CAP_OPENNI_BGR_IMAGE )
Now I don't know whether I have to calibrate the Kinect to get correct depth pixel values. That is, if I take a pixel (u, v) from the RGB image, do I get the correct depth value by taking the pixel (u, v) from the depth image? depthMap.at<uchar>(u,v) Any help is much appreciated. Thanks! Answer 1: You can check if registration is on like so: cout << "REGISTRATION " <<
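
For reference, a hedged Python equivalent of that check (it assumes OpenCV was built with OpenNI support; the constants mirror the C++ ones in the question). With CAP_PROP_OPENNI_REGISTRATION enabled, the depth map is aligned to the BGR image, and the depth values are 16-bit millimetres rather than uchar:

    import cv2

    cap = cv2.VideoCapture(cv2.CAP_OPENNI)

    # Check and, if needed, enable depth-to-color registration.
    print("REGISTRATION", cap.get(cv2.CAP_PROP_OPENNI_REGISTRATION))
    cap.set(cv2.CAP_PROP_OPENNI_REGISTRATION, 1)

    if cap.grab():
        ok_d, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)   # CV_16UC1, millimetres
        ok_c, bgr = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)
        if ok_d and ok_c:
            u, v = 320, 240
            print("depth at (u, v) in mm:", depth[v, u])            # index as [row, col]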

OpenCV 2.3 camera calibration

Submitted by 回眸只為那壹抹淺笑 on 2019-12-30 03:27:29
Question: I'm trying to use the OpenCV 2.3 Python bindings to calibrate a camera. I've used the data below in MATLAB and the calibration worked, but I can't seem to get it to work in OpenCV. The camera matrix I set up as an initial guess is very close to the answer calculated from the MATLAB toolbox.
import cv2
import numpy as np
obj_points = [[-9.7,3.0,4.5],[-11.1,0.5,3.1],[-8.5,0.9,2.4],[-5.8,4.4,2.7],[-4.8,1.5,0.2],[-6.7,-1.6,-0.4],[-8.7,-3.3,-0.6],[-4.3,-1.2,-2.4],[-12.4,-2.3,0.9], [-14.1,-3.8,-0.6],[
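
For comparison, a hedged sketch of what the Python bindings expect (synthetic data, not the poster's point list): float32 NumPy arrays, one (N,3) object-point array and one (N,2) image-point array per view, and CALIB_USE_INTRINSIC_GUESS when a single non-planar view plus an initial camera matrix are supplied.

    import cv2
    import numpy as np

    # Synthetic scene: random 3D points projected with a known pose and intrinsics.
    rng = np.random.default_rng(0)
    obj = (rng.random((15, 3)) * 10 - 5).astype(np.float32)

    K_true = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])
    rvec = np.array([0.1, -0.2, 0.05])
    tvec = np.array([0.0, 0.0, 30.0])
    img, _ = cv2.projectPoints(obj, rvec, tvec, K_true, np.zeros(5))
    img = img.reshape(-1, 2).astype(np.float32)

    # Calibrate with the known intrinsics as the initial guess.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [obj], [img], (640, 480), K_true.copy(), np.zeros(5),
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    print("RMS reprojection error:", rms)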