matlab-cvst

Surface feature detection in image processing

*爱你&永不变心* submitted on 2019-12-25 17:20:10
Question: An example of detectSURFFeatures comparing 2 images is below. I couldn't make the detectSURFFeatures function work in my MATLAB. Neither help detectSURFFeatures nor doc detectSURFFeatures gives any clue. The error says "> UncalibratedSterio Undefined function 'detectSURFFeatures' for input arguments of type 'uint8'." but as far as I know the function itself supports uint8. What should I do?

%Rectified Sterio Image Uncalibrated
% There is no calibration of cameras
I1 = rgb2gray(imread('right_me.jpg'));
I2 = rgb2gray
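Assuming the toolbox is actually installed (an "Undefined function" error for a shipped function usually means the Computer Vision System Toolbox is missing or unlicensed, which `ver` will confirm), the detect-and-match pipeline the question is attempting looks roughly like this; the second file name is a placeholder:

```matlab
% Minimal SURF detect/match sketch; assumes the Computer Vision System
% Toolbox is installed ('ver' should list it). 'left_me.jpg' is a
% hypothetical name for the second image.
I1 = rgb2gray(imread('right_me.jpg'));
I2 = rgb2gray(imread('left_me.jpg'));
points1 = detectSURFFeatures(I1);       % works on uint8 grayscale input
points2 = detectSURFFeatures(I2);
[f1, vpts1] = extractFeatures(I1, points1);
[f2, vpts2] = extractFeatures(I2, points2);
indexPairs = matchFeatures(f1, f2);
matched1 = vpts1(indexPairs(:,1));
matched2 = vpts2(indexPairs(:,2));
figure; showMatchedFeatures(I1, I2, matched1, matched2, 'montage');
```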

Calculating the displacement of a point in MATLAB

爷，独闯天下 submitted on 2019-12-24 10:44:58
Question: I need to compare two or more images to calculate how much a point shifted in the x and y directions. How do I go about doing this in MATLAB?

Answer 1: What you are looking for is an "optical flow" algorithm. There are many around, some faster but less accurate, some slower and more accurate. Click here to find a MATLAB optical flow implementation (Lucas-Kanade).

Answer 2: Gilad's suggestion of a Lucas-Kanade tracker/optical flow calculator is really good, and is what I would use. It does however have
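The Lucas-Kanade approach suggested in Answer 1 is built into the toolbox as the opticalFlowLK object; a minimal sketch, assuming two grayscale frames frame1 and frame2 are already loaded:

```matlab
% Lucas-Kanade optical flow between two frames using the toolbox's
% opticalFlowLK object. frame1 and frame2 are assumed grayscale images.
opticFlow = opticalFlowLK('NoiseThreshold', 0.009);
estimateFlow(opticFlow, frame1);         % prime the object with frame 1
flow = estimateFlow(opticFlow, frame2);  % flow from frame1 to frame2
dx = flow.Vx;                            % per-pixel shift in x
dy = flow.Vy;                            % per-pixel shift in y
```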

Calibrated camera get matched points for 3D reconstruction, ideal test failed

旧时模样 submitted on 2019-12-24 04:22:01
Question: I previously asked the question "Use calibrated camera get matched points for 3D reconstruction", but the problem was not described clearly. So here I use a detailed case, with every step, to show it. I hope someone can help me figure out where my mistake is. At first I made 10 3D points with coordinates:

>> X = [0,0,0; -10,0,0; -15,0,0; -13,3,0; 0,6,0; -2,10,0; -13,10,0; 0,13,0; -4,13,0; -8,17,0]

These points are on the same plane, shown in this picture: My next step is to use the 3D-2D
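The 3D-to-2D projection step the question is heading into can be sketched with placeholder camera parameters; only the point coordinates below come from the question:

```matlab
% Project the 10 planar 3D points into an image. K, R and t are
% hypothetical placeholders, not values from the question.
X = [0,0,0; -10,0,0; -15,0,0; -13,3,0; 0,6,0; ...
     -2,10,0; -13,10,0; 0,13,0; -4,13,0; -8,17,0];
K = [800 0 320; 0 800 240; 0 0 1];   % assumed intrinsics
R = eye(3);  t = [5; -5; 100];       % assumed camera pose
Xc = R * X.' + t;                    % 3x10 points in camera coordinates
x  = K * Xc;                         % homogeneous image coordinates
x  = x(1:2,:) ./ x(3,:);             % 2x10 pixel coordinates
```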

Error in Fundamental Matrix?

我的未来我决定 submitted on 2019-12-24 00:57:41
Question: I am trying to estimate the pose of a camera from two images taken with it, by detecting features in the images, matching them, creating the fundamental matrix, using the camera intrinsics to calculate the essential matrix, and then decomposing it to find the rotation and translation. Here is the MATLAB code:

I1 = rgb2gray(imread('1.png'));
I2 = rgb2gray(imread('2.png'));
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);
points1 = points1.selectStrongest(40);
points2 =
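Past the truncation point, the usual continuation of this pipeline is: extract and match descriptors, estimate F, lift it to E with the intrinsics, and decompose E. A hedged continuation sketch, assuming matched point sets matchedPoints1/matchedPoints2 (from matchFeatures) and an intrinsic matrix K are available — K and the matched points are placeholders here:

```matlab
% Continuation sketch of the pose-estimation pipeline described above.
% matchedPoints1, matchedPoints2 and K are assumed to exist already.
F = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'RANSAC', 'NumTrials', 2000);
E = K' * F * K;                 % essential matrix from intrinsics
[U, ~, V] = svd(E);
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U * W  * V';               % two candidate rotations...
R2 = U * W' * V';
t  = U(:,3);                    % ...and translation up to sign/scale
% Enforce det(R) = +1, then pick the (R, t) combination that places
% triangulated points in front of both cameras (cheirality check).
```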

Not understanding what the 'spatial weights' for HOG are

你离开我真会死。 submitted on 2019-12-23 21:31:23
Question: I am using HOG for sunflower detection. I understand most of what HOG is doing now, but there are some things I do not understand in the final stages. (I am going through the MATLAB code from MathWorks.) Let us assume we are using the Dalal-Triggs implementation: 8x8 pixels make 1 cell, 2x2 cells make 1 block, blocks are taken at 50% overlap in both directions, and the histograms are quantized into 9 unsigned bins (meaning, from 0 to 180 degrees). Finally, our
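For reference, the Dalal-Triggs configuration described above maps directly onto the toolbox's extractHOGFeatures parameters; the input image below is a placeholder:

```matlab
% The Dalal-Triggs HOG configuration from the question, expressed via
% extractHOGFeatures. 'cameraman.tif' is just a stand-in test image.
I = imread('cameraman.tif');
[features, hogVis] = extractHOGFeatures(I, ...
    'CellSize',       [8 8], ...   % 8x8 pixels per cell
    'BlockSize',      [2 2], ...   % 2x2 cells per block
    'BlockOverlap',   [1 1], ...   % 1-cell step = 50% overlap both ways
    'NumBins',        9, ...       % 9 orientation bins
    'UseSignedOrientation', false);% unsigned: 0 to 180 degrees
figure; imshow(I); hold on; plot(hogVis);  % inspect the cell histograms
```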

Generate a point cloud from a given depth image - MATLAB Computer Vision System Toolbox

我只是一个虾纸丫 submitted on 2019-12-22 12:18:46
Question: I am a beginner in MATLAB and have purchased the Computer Vision System Toolbox. I have been given 400 depth images (.PNG images) and would like to create a point cloud for each image. I looked at the documentation of the Computer Vision System Toolbox, and there is an example of converting a depth image to a point cloud (http://uk.mathworks.com/help/vision/ref/depthtopointcloud.html):

[xyzPoints,flippedDepthImage] = depthToPointCloud(depthImage,depthDevice)
depthDevice = imaq.VideoDevice('kinect',2)
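The documented example needs a live depthDevice, which is not available when working from saved PNG files. A common workaround is manual back-projection with the sensor's depth intrinsics; the focal lengths and principal point below are typical Kinect v1 defaults, not calibrated values, and the file name is a placeholder:

```matlab
% Back-project a saved depth PNG into a point cloud without the device.
% fx/fy/cx/cy are assumed (typical Kinect v1 depth intrinsics).
D = double(imread('depth_0001.png'));   % hypothetical file, depth in mm
fx = 585; fy = 585; cx = 320; cy = 240;
[u, v] = meshgrid(1:size(D,2), 1:size(D,1));
Z = D / 1000;                 % millimetres -> metres
X = (u - cx) .* Z / fx;
Y = (v - cy) .* Z / fy;
ptCloud = pointCloud(cat(3, X, Y, Z));  % MxNx3 organized point cloud
pcshow(ptCloud);
```

Wrapping this in a loop over the 400 files produces one point cloud per image.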

Silhouette extraction from depth

牧云@^-^@ submitted on 2019-12-22 10:57:55
Question: Hello, I have a depth image and I want to extract the person's (human) silhouette from it. I used pixel thresholding like this:

for i=1:240
    for j=1:320
        if b(i,j)>2400 || b(i,j)<1900
            c(i,j)=5000;
        else
            c(i,j)=b(i,j);
        end
    end
end

but some parts are left over. Is there any way to remove them?

Original_image: Extracted_silhouette:

Answer 1: According to this thread, depth map boundaries can be found based on the direction of estimated surface normals. To estimate the direction of the surface normals, you can
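The loop above can be vectorized, and the leftover fragments can often be removed with morphological cleanup. A sketch using the threshold values from the question; the structuring-element size is an assumption:

```matlab
% Vectorized version of the thresholding loop, plus morphological
% cleanup. Thresholds (1900-2400) and the 5000 background sentinel are
% from the question; the disk radius is an assumption to tune.
mask = b >= 1900 & b <= 2400;          % depth range covering the person
mask = imopen(mask, strel('disk', 3)); % remove small speckles
mask = bwareafilt(mask, 1);            % keep only the largest blob
c = b;
c(~mask) = 5000;                       % background, as in the question
```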

Correct lens distortion using single calibration image in Matlab

女生的网名这么多〃 submitted on 2019-12-22 09:59:30
Question: I would like to correct lens distortion in a series of images. All the images were captured with the camera fixed in place, and a checkerboard image from the same setup is also available. After detecting the corners of the distorted checkerboard image, I would like to compute the radial distortion coefficients so that I can correct the images, similar to the estimateCameraParameters function. Ideally, I would like to use a method similar to MATLAB camera calibration; however, this does not
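For reference, the standard toolbox workflow is sketched below; note that estimateCameraParameters normally expects several checkerboard views, which may be exactly why a single calibration image falls short here. File name and square size are assumptions:

```matlab
% Standard toolbox calibration sketch. estimateCameraParameters usually
% wants multiple checkerboard views; with a single image the distortion
% estimate can be poor or the call may refuse outright, so capturing a
% few extra shots of the board (moved, camera fixed) is the easy fix.
I = imread('checkerboard.jpg');                 % hypothetical file
[imagePoints, boardSize] = detectCheckerboardPoints(I);
squareSize = 25;                                % mm, an assumption
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
params = estimateCameraParameters(imagePoints, worldPoints, ...
    'ImageSize', [size(I,1) size(I,2)]);
corrected = undistortImage(I, params);          % apply to any image
```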

How to generate a 3D point cloud from a depth image and a color image acquired from MATLAB

不羁的心 submitted on 2019-12-21 17:57:36
Question: I have 2 sets of data acquired from a Kinect: 1) a depth image of size 480*640 (uint16) from a scene; 2) a color image of the same size (480*640*3 single) from the same scene. The question is how I can merge these data to generate a colored 3D point cloud in PLY format in MATLAB. I have to say that unfortunately I don't have access to the Kinect anymore and must use only these data.

Answer 1: I've never tried to do that in MATLAB, but I think this is what you are looking for: http://es
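One way to do this without the device, assuming the depth and color images are already pixel-aligned (raw Kinect streams usually need a registration step first) and using assumed depth intrinsics:

```matlab
% Hedged sketch: back-project depth, attach color, write a PLY file.
% depthImage (480x640 uint16, mm) and colorImage (480x640x3 single)
% are from the question; the intrinsics are assumed, not calibrated.
D = double(depthImage);
fx = 585; fy = 585; cx = 320; cy = 240;   % assumed depth intrinsics
[u, v] = meshgrid(1:640, 1:480);
Z = D / 1000;                             % mm -> m
xyz = cat(3, (u - cx).*Z/fx, (v - cy).*Z/fy, Z);
ptCloud = pointCloud(xyz, 'Color', im2uint8(colorImage));
pcwrite(ptCloud, 'scene.ply', 'Encoding', 'binary');  % colored PLY
```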

Rotation and Translation from Essential Matrix incorrect

泪湿孤枕 submitted on 2019-12-21 04:58:24
Question: I currently have a stereo camera setup. I have calibrated both cameras and have the intrinsic matrices K1 and K2:

K1 = [2297.311, 0, 319.498;
      0, 2297.313, 239.499;
      0, 0, 1];
K2 = [2297.304, 0, 319.508;
      0, 2297.301, 239.514;
      0, 0, 1];

I have also determined the fundamental matrix F between the two cameras using findFundamentalMat() from OpenCV. I have tested the epipolar constraint using a pair of corresponding points x1 and x2 (in pixel coordinates) and it is very close to 0.
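A useful sanity check when R and t come out wrong is to verify the F-to-E conversion itself: a valid essential matrix must have two equal singular values and one zero. A self-contained sketch using the question's intrinsics and a hypothetical ground-truth pose:

```matlab
% Build a ground-truth E from a known pose, convert to F with the
% question's intrinsics, then confirm that E = K2'*F*K1 has singular
% values (s, s, 0). The pose (R, t) here is a placeholder.
K1 = [2297.311, 0, 319.498; 0, 2297.313, 239.499; 0, 0, 1];
K2 = [2297.304, 0, 319.508; 0, 2297.301, 239.514; 0, 0, 1];
R = eye(3);  t = [1; 0; 0];                       % hypothetical pose
tx = [0 -t(3) t(2); t(3) 0 -t(1); -t(2) t(1) 0];  % skew matrix [t]x
E_true = tx * R;
F = (K2') \ E_true / K1;     % F = inv(K2') * E * inv(K1)
E = K2' * F * K1;            % recover E from F, as in the question
s = svd(E);                  % expect s(1) == s(2) and s(3) near zero
```

If the recovered E's singular values are far from this pattern on real data, the problem lies in F (poor matches or normalization) rather than in the decomposition step.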