3d-reconstruction

Stereo vision 3D point calculation with known intrinsic and extrinsic matrices

烈酒焚心 submitted on 2019-12-04 15:46:43
I have successfully calculated the rotation and translation between two cameras, along with their intrinsic camera matrices. I also obtained rectified images from the left and right cameras. Now I wonder how to calculate the 3D coordinates of a point, just one point in an image; here, please see the green points. I have looked at the equation, but it requires the baseline, which I don't know how to calculate. Could you show me the process of calculating the 3D coordinates of the green point with the given information (R, T, and the intrinsic matrices)? FYI: 1. I also have a fundamental matrix and an essential matrix, just in case we need
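For reference, a minimal sketch of the triangulation this question asks about, assuming R and T map points from camera 1's frame into camera 2's frame; the baseline, if needed for the disparity formula Z = f*B/d, is simply the length of the translation vector, np.linalg.norm(T). All names below are illustrative placeholders:

    import numpy as np
    import cv2

    def triangulate_point(K1, K2, R, T, pt1, pt2):
        # Camera 1 sits at the origin; camera 2 is displaced by [R|T].
        P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K2 @ np.hstack([R, T.reshape(3, 1)])
        # cv2.triangulatePoints expects 2xN arrays of pixel coordinates
        # and returns the homogeneous 3D point(s) as a 4xN array.
        X_h = cv2.triangulatePoints(P1, P2,
                                    np.asarray(pt1, float).reshape(2, 1),
                                    np.asarray(pt2, float).reshape(2, 1))
        return (X_h[:3] / X_h[3]).ravel()   # de-homogenize to (X, Y, Z)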

Transforming projection matrices computed from a trifocal tensor to estimate 3D points

拜拜、爱过 submitted on 2019-12-04 12:10:43
I am using this legacy code: http://fossies.org/dox/opencv-2.4.8/trifocal_8cpp_source.html to estimate 3D points from given corresponding 2D points across 3 different views. The problem I face is the same as stated here: http://opencv-users.1802565.n2.nabble.com/trifocal-tensor-icvComputeProjectMatrices6Points-icvComputeProjectMatricesNPoints-td2423108.html I could compute the projection matrices successfully using icvComputeProjectMatrices6Points, using 6 sets of corresponding points from the 3 views. The results are shown below: projMatr1 P1 = [-0.22742541, 0.054754492, 0.30500898, -0.60233182; -0
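As a hedged sketch of the estimation step (not the poster's code): given the three 3x4 projection matrices recovered from the trifocal tensor, each 3D point can be triangulated linearly by DLT. Bear in mind that matrices from icvComputeProjectMatrices6Points are only defined up to a common projective transform, so points triangulated this way are projective too until the reconstruction is upgraded to metric:

    import numpy as np

    def triangulate_dlt(Ps, pts):
        # Ps: list of 3x4 projection matrices (here, three of them);
        # pts: the matching (x, y) pixel coordinates, one per view.
        A = []
        for P, (x, y) in zip(Ps, pts):
            A.append(x * P[2] - P[0])   # x * p3^T - p1^T
            A.append(y * P[2] - P[1])   # y * p3^T - p2^T
        # The homogeneous 3D point is the null vector of A, i.e. the
        # right singular vector for the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        X = Vt[-1]
        return X[:3] / X[3]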

OpenCV with stereo 3D reconstruction

给你一囗甜甜゛ submitted on 2019-12-03 12:55:32
Say I plan to use OpenCV for 3D reconstruction using a stereo approach... and I do not have any special stereo camera, only webcams. 1.) How do I build a cheap stereo setup using a set of webcams? 2.) Is it possible to snap two images using webcams and convert them to stereo using the OpenCV API? I will use the stereo algorithm from the link below: Stereo vision with OpenCV. Using this approach I want to create a detailed mapping of an indoor environment. (I would not like to use any projects like Insight3D, which cannot be used for commercial purposes without distributing the source code.) You can
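On question 2, a rough sketch of what that looks like in code, assuming the webcam pair has already been stereo-calibrated and rectified; the device indices and SGBM parameters below are placeholder values to tune for a particular rig:

    import cv2

    capL, capR = cv2.VideoCapture(0), cv2.VideoCapture(1)
    okL, frameL = capL.read()
    okR, frameR = capR.read()
    if okL and okR:
        grayL = cv2.cvtColor(frameL, cv2.COLOR_BGR2GRAY)
        grayR = cv2.cvtColor(frameR, cv2.COLOR_BGR2GRAY)
        # Semi-global block matching; numDisparities and blockSize are
        # example values only.
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                     blockSize=9)
        disparity = sgbm.compute(grayL, grayR).astype("float32") / 16.0
        # With the Q matrix from cv2.stereoRectify, the disparity map
        # becomes a 3D map: cv2.reprojectImageTo3D(disparity, Q)
    capL.release(); capR.release()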

3D reconstruction from two calibrated cameras - where is the error in this pipeline?

雨燕双飞 submitted on 2019-12-02 19:42:18
There are many posts about 3D reconstruction from stereo views of known internal calibration, some of which are excellent. I have read a lot of them, and based on what I have read I am trying to compute my own 3D scene reconstruction with the pipeline / algorithm below. I'll set out the method, then ask specific questions at the bottom. 0. Calibrate your cameras: this means retrieving the camera calibration matrices K1 and K2 for Camera 1 and Camera 2. These are 3x3 matrices encapsulating each camera's internal parameters: focal length and principal point offset / image centre. These don't change
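For orientation, a condensed sketch of how the later steps of such a pipeline commonly fit together in OpenCV. For simplicity it assumes both views share the calibration matrix K (the general K1/K2 case normalizes the points first), and none of the names are the poster's:

    import numpy as np
    import cv2

    def reconstruct(pts1, pts2, K):
        # pts1, pts2: Nx2 float arrays of matched pixel coordinates.
        # Essential matrix via the 5-point algorithm with RANSAC.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        # Decompose E into (R, t), using the cheirality check to resolve
        # the fourfold ambiguity.
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        X_h = cv2.triangulatePoints(P1, P2,
                                    pts1.T.astype(float),
                                    pts2.T.astype(float))
        return (X_h[:3] / X_h[3]).T   # Nx3 points, up to the scale of t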

Basic space carving algorithm

﹥>﹥吖頭↗ submitted on 2019-12-01 12:42:52
I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedralization algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud? Pseudocode of the algorithm:

    for every 3D feature point
        convert it to 2D projected coordinates
    for every 2D feature point
        cast a ray toward the polygons of the mesh
        get the intersection point
        if z_intersection < z of the 3D feature point
            for (every triangle vertex)
                cull that triangle

Here is a follow-up implementation of the algorithm mentioned by the guru Spektre :) Updated code for the
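A rough Python rendering of that pseudocode, under some stated assumptions: rays are cast from the camera centre through each 3D feature point (which subsumes the 2D projection step), and intersect() stands in for whatever ray/triangle routine (e.g. Möller-Trumbore) the code base provides:

    import numpy as np

    def carve(mesh_triangles, feature_points_3d, cam_center, intersect):
        # intersect(origin, direction, triangle) -> distance t along the
        # ray to the hit, or None. A placeholder, not a real library call.
        culled = set()
        for X in feature_points_3d:
            ray = X - cam_center
            point_dist = np.linalg.norm(ray)
            direction = ray / point_dist
            for i, tri in enumerate(mesh_triangles):
                t = intersect(cam_center, direction, tri)
                # A hit closer than the feature point means the triangle
                # occludes observed geometry, so it is carved away.
                if t is not None and t < point_dist:
                    culled.add(i)
        return [tri for i, tri in enumerate(mesh_triangles)
                if i not in culled]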

Creating OOBB from points

笑着哭i submitted on 2019-11-30 10:36:44
How can I create a minimal OOBB for given points? Creating an AABB or a sphere is very easy, but I have problems creating a minimal OOBB. [edit] The first answer didn't get me good results. I don't have a huge cloud of points; I have a small number of points. I am doing collision-geometry generation. For example, a cube has 36 points (6 sides, 2 triangles each, 3 points per triangle), and the algorithm from the first post gave bad results for a cube. Example points for a cube: http://nopaste.dk/download/3382 (should
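For context, the usual quick approximation is the PCA-based OBB sketched below. It is worth hedging that PCA does not give the true minimal box, and for small symmetric point sets like a cube's 36 vertices it can be noticeably off (which may be exactly the bad results described); an exact minimum-volume OBB needs something like O'Rourke's algorithm, or rotating calipers over the convex hull in the 2D case:

    import numpy as np

    def pca_obb(points):
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        centered = pts - centroid
        # Principal axes of the point set become the box axes.
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        local = centered @ Vt.T          # points in box-aligned coordinates
        lo, hi = local.min(axis=0), local.max(axis=0)
        center = centroid + ((lo + hi) / 2) @ Vt
        half_extents = (hi - lo) / 2
        return center, Vt, half_extents  # origin, axis rows, half sizes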

3D reconstruction — How to create 3D model from 2D image?

怎甘沉沦 submitted on 2019-11-30 06:11:01
If I take a picture with a camera so that I know the distance from the camera to the object, such as a scale model of a house, I would like to turn this into a 3D model that I can maneuver around, so I can comment on different parts of the house. If I sit down and think about taking more than one picture, labeling direction and distance, I should be able to figure out how to do this, but I thought I would ask if someone knows of a paper that may help explain more. What language you explain in

3D model construction using multiple images from multiple points (Kinect)

你说的曾经没有我的故事 submitted on 2019-11-30 00:47:12
Is it possible to construct a 3D model of a still object if various images, along with depth data, are gathered from various angles? What I was thinking was to have a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data. Theoretically this is possible. The model would also
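A small sketch of the fusion step this setup implies, assuming the turntable angle of each capture is known and depth_to_points is a placeholder for back-projecting a Kinect depth image through its intrinsics; in practice the angles drift, so an ICP refinement pass between scans is usually added on top:

    import numpy as np

    def merge_scans(depth_frames, angles_deg, depth_to_points):
        merged = []
        for depth, angle in zip(depth_frames, angles_deg):
            pts = depth_to_points(depth)      # Nx3 points, camera frame
            a = np.radians(angle)
            # Rotate about the turntable's vertical (y) axis so every
            # scan lands in one shared, object-centered frame.
            R = np.array([[ np.cos(a), 0, np.sin(a)],
                          [ 0,         1, 0        ],
                          [-np.sin(a), 0, np.cos(a)]])
            merged.append(pts @ R.T)
        return np.vstack(merged)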