I'm new to this field and I'm trying to model a simple scene in 3D from 2D images, and I don't have any information about the cameras. I know that there are 3 options:
StereoRectifyUncalibrated computes only a planar perspective transformation in image space, not a rectification transformation in object space. To extract the Q matrix, this planar transformation has to be converted into an object-space transformation, and I think some camera calibration parameters (such as the camera intrinsics) are required for that. There may be ongoing research on this topic.
You may have to add some steps for estimating the camera intrinsics and extracting the relative orientation of the cameras to make your pipeline work correctly. I think the camera calibration parameters are vital for recovering a proper 3D structure of the scene, unless an active lighting method is used.
Bundle (block) adjustment based solutions are also needed to refine all the estimated values into more accurate ones.
I think you need to use StereoRectify to rectify your images and get Q. This function needs two parameters, R and T, the rotation and translation between the two cameras. You can compute them with solvePnP, which needs the 3D real-world coordinates of some object together with the corresponding 2D points in the images.
The procedure looks OK to me.
As far as I know, in image-based 3D modelling the cameras are either explicitly calibrated or implicitly calibrated. Even if you don't want to calibrate the camera explicitly, you will end up making use of those parameters anyway. Matching corresponding point pairs is definitely a heavily used approach.