Error in calculating perspective transform for opencv in Matlab


Question


I am trying to recode feature matching and homography using mexopencv. Mexopencv ports the OpenCV vision toolbox into MATLAB.

My code in MATLAB using the OpenCV toolbox:

function hello
    close all; clear all;

    disp('Feature matching demo, press key when done');

    boxImage = imread('D:/pic/500_1.jpg');
    boxImage = rgb2gray(boxImage);
    [boxPoints,boxFeatures] = cv.ORB(boxImage);

    sceneImage = imread('D:/pic/100_1.jpg');
    sceneImage = rgb2gray(sceneImage);
    [scenePoints,sceneFeatures] = cv.ORB(sceneImage);

    if (isempty(scenePoints) || isempty(boxPoints))
        return;
    end

    matcher = cv.DescriptorMatcher('BruteForce');
    matches = matcher.match(boxFeatures,sceneFeatures);

    % Box contains pixel coordinates where there are matches
    box = [boxPoints([matches(2:end).queryIdx]).pt];

    % Scene contains pixel coordinates where there are matches
    scene = [scenePoints([matches(2:end).trainIdx]).pt];

    % Please refer to http://stackoverflow.com/questions/4682927/matlab-using-mat2cell

    % Box array contains coordinates in the form [(x1,y1), (x2,y2), ...]
    % after applying mat2cell
    [nRows, nCols] = size(box);
    nSubCols = 2;
    box = mat2cell(box,nRows,nSubCols.*ones(1,nCols/nSubCols));

    % Scene array contains coordinates in the form [(x1,y1), (x2,y2), ...]
    % after applying mat2cell
    [nRows, nCols] = size(scene);
    nSubCols = 2;
    scene = mat2cell(scene,nRows,nSubCols.*ones(1,nCols/nSubCols));

    % Finding homography between box and scene
    H = cv.findHomography(box,scene);

    boxCorners = [1, 1;...                           % top-left
        size(boxImage, 2), 1;...                 % top-right
        size(boxImage, 2), size(boxImage, 1);... % bottom-right
        1, size(boxImage, 1)];

    % Fine until this point, problem starts with perspectiveTransform
    sceneCorners = cv.perspectiveTransform(boxCorners,H);

end

The error:

    Error using cv.perspectiveTransform
Unexpected Standard exception from MEX file.
What()
is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp:1926:
error: (-215) scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F)

..

Error in hello (line 58)
  sceneCorners= cv.perspectiveTransform(boxCorners,H);

The problem starts with perspectiveTransform(boxCorners, H); everything up to finding the homography was fine. Also note that while collecting the matching coordinates from the sample and the scene, I indexed from 2:end, box = [boxPoints([matches(2:end).queryIdx]).pt], since the queryIdx of the first element is zero and position zero cannot be accessed in MATLAB. However, I don't think this should be the problem. Anyhow, I am looking forward to an answer. Thanks.
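For reference, a minimal sketch of keeping all matches and shifting the zero-based indices by one instead of dropping the first match (using the variables from the code above; this is the same adjustment used in the updates and answer below):

% adjust C zero-based indices to MATLAB one-based, keeping every match
box   = cat(1, boxPoints([matches.queryIdx]+1).pt);    % N-by-2 [x y] rows
scene = cat(1, scenePoints([matches.trainIdx]+1).pt);  % N-by-2 [x y] rows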

PS: This is an edited version of my original post here. The solution I received below was not adequate, and the bug kept recurring.

2nd Update:

Following @Amro's suggestion, I have updated my code below. The inliers look good, however the coordinates produced by the perspective transform somehow come out twisted.

function hello
    close all; clear all; clc;

    disp('Feature matching with ORB');

    %Feature detector and extractor for object
    imgObj = imread('D:/pic/box.png');
    %boxImage = rgb2gray(boxImage);
    [keyObj,featObj] = cv.ORB(imgObj);

    %Feature detector and extractor for scene
    imgScene = imread('D:/pic/box_in_scene.png');
    %sceneImage = rgb2gray(sceneImage);
    [keyScene,featScene] = cv.ORB(imgScene);

    if (isempty(keyScene)|| isempty(keyObj)) 
        return;
    end;

    matcher = cv.DescriptorMatcher('BruteForce-HammingLUT');
    m = matcher.match(featObj,featScene);

    %im_matches = cv.drawMatches(boxImage, boxPoints, sceneImage, scenePoints,m);

    % extract keypoints from the filtered matches
    % (C zero-based vs. MATLAB one-based indexing)
    ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
    ptsObj = num2cell(ptsObj, 2);
    ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
    ptsScene = num2cell(ptsScene, 2);

    % compute homography
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

    % remove outliers reported by RANSAC
    inliers = logical(inliers);
    m = m(inliers);

    % show the final matches
    imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
    'NotDrawSinglePoints',true);
    imshow(imgMatches);

    % apply the homography to the corner points of the box
    [h,w] = size(imgObj);
    corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
    p = cv.perspectiveTransform(corners, H)
    p = permute(p, [2 3 1])
    p = bsxfun(@plus, p, [size(imgObj,2) 0]);

    % draw lines between the transformed corners (the mapped object)
    opts = {'Color',[0 255 0], 'Thickness',4};
    imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
    imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
    imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
    imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
    imshow(imgMatches)
    title('Matches & Object detection')

end

The matching output looks fine, however perspectiveTransform is not giving the right coordinates for this problem. My output thus far:

3rd Update:

I have got all of the code running fine with the homography. However, a corner case is bugging me really hard. If I do imgObj = imread('D:/pic/box.png') and imgScene = imread('D:/pic/box_in_scene.png'), I get the homography rectangle just fine, but when I do imgScene = imread('D:/pic/box.png'), i.e. the object and the scene are the same image, I get this error -

Error using cv.findHomography
Unexpected Standard exception from MEX file.
What()
is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\calib3d\src\fundam.cpp:1074:
error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() ==
points2.type()

..

Error in hello (line 37)
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

I have come across this error in the past; it happens when the number of ptsObj or ptsScene is low, e.g. when the scene is nothing but a white/black screen and yields zero keypoints. In this particular case there is an ample number of points in both ptsObj and ptsScene, so where can the problem lie? I have tested this code using SURF and the same error resurfaces.
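As a hedged diagnostic sketch, just to rule out a size mismatch in the point sets before the call (the error message mentions matching point counts and types):

% quick sanity check on the point sets passed to cv.findHomography
fprintf('%d object points, %d scene points\n', numel(ptsObj), numel(ptsScene));
assert(numel(ptsObj) == numel(ptsScene), 'point counts must match');
assert(numel(ptsObj) >= 4, 'need at least 4 point pairs for a homography');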


Answer 1:


A couple of remarks:

  • the matcher returns zero-based indices (as do various other functions, on account of OpenCV being implemented in C++). So if you want to get the corresponding keypoints you have to adjust by one (MATLAB arrays are one-based). mexopencv intentionally does not adjust for this automatically.

  • The cv.findHomography MEX-function accepts points either as a numeric array of size 1xNx2 (e.g. cat(3, [x1,x2,...], [y1,y2,...])) or as an N-sized cell array of two-element vectors (i.e. {[x1,y1], [x2,y2], ...}). In this case, I'm not sure your code is packing the points correctly; either way it could be made much simpler (see the short sketch below).
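For illustration, a minimal sketch of the two accepted layouts (the point values are made up):

xy = [10 20; 30 40; 50 60; 70 80];   % N-by-2 matrix, one [x y] point per row

% layout 1: numeric array of size 1xNx2
ptsNum = permute(xy, [3 1 2]);       % equivalently: cat(3, xy(:,1)', xy(:,2)')

% layout 2: N-element cell array of two-element vectors
ptsCell = num2cell(xy, 2);           % {[10 20]; [30 40]; ...}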

Here is the complete demo translated from C++ to MATLAB:

% input images
imgObj = imread('box.png');
imgScene = imread('box_in_scene.png');

% detect keypoints and calculate descriptors using SURF
detector = cv.FeatureDetector('SURF');
keyObj = detector.detect(imgObj);
keyScene = detector.detect(imgScene);

extractor = cv.DescriptorExtractor('SURF');
featObj = extractor.compute(imgObj, keyObj);
featScene = extractor.compute(imgScene, keyScene);

% match descriptors using FLANN
matcher = cv.DescriptorMatcher('FlannBased');
m = matcher.match(featObj, featScene);

% keep only "good" matches (whose distance is less than k*min_dist )
dist = [m.distance];
m = m(dist < 3*min(dist));

% extract keypoints from the filtered matches
% (C zero-based vs. MATLAB one-based indexing)
ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
ptsObj = num2cell(ptsObj, 2);
ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
ptsScene = num2cell(ptsScene, 2);

% compute homography
[H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

% remove outliers reported by RANSAC
inliers = logical(inliers);
m = m(inliers);

% show the final matches
imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
    'NotDrawSinglePoints',true);
imshow(imgMatches)

% apply the homography to the corner points of the box
[h,w] = size(imgObj);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
p = bsxfun(@plus, p, [size(imgObj,2) 0]);

% draw lines between the transformed corners (the mapped object)
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
title('Matches & Object detection')

Now you can try one of the other algorithms for feature detection/extraction (ORB in your case). Just remember you might need to adjust some of the parameters above to get good results (for example the multiplier used to control how many of the keypoint matches to keep).
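For instance, a minimal sketch of tightening that multiplier (the value 2 here is arbitrary, for illustration only):

k = 2;                          % smaller k keeps fewer, stricter matches
dist = [m.distance];
m = m(dist < k*min(dist));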


EDIT:

Like I said, there is no one-size-fits-all solution in computer vision. You need to experiment by adjusting the various algorithm parameters to get good results on your data. For instance, the ORB constructor accepts a number of options. Also, as the documentation suggests, the brute-force matcher with Hamming distance is the recommended matcher for ORB descriptors.

Finally, note that I specified the robust RANSAC algorithm as the method used for computing the homography matrix; looking at the screenshot you posted, you can see an outlier match incorrectly pointing towards the black computer vision book in the scene. The advantage of RANSAC is that it can estimate accurately even when there is a large number of outliers in the data, whereas the default method used by findHomography uses all available points.
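To make the contrast concrete, a minimal sketch of both calls (same ptsObj/ptsScene as above):

% least-squares fit using all point pairs (the default)
H0 = cv.findHomography(ptsObj, ptsScene);

% robust RANSAC estimation, which also returns an inlier mask
[H, inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');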

Furthermore, note that some of the control points used to estimate the homography in your case are almost collinear, which can badly affect the computation (much like numerically inverting a near-singular matrix is a bad idea).

With the above said, I am highlighting below the relevant parts of the code which gave me good results using ORB descriptors (the rest is unchanged from what I previously posted):

% detect keypoints and calculate descriptors using ORB
[keyObj,featObj] = cv.ORB(imgObj);
[keyScene,featScene] = cv.ORB(imgScene);

% match descriptors using brute force with Hamming distances
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
m = matcher.match(featObj, featScene);

% keep only "good" matches (whose distance is less than k*min_dist )
dist = [m.distance];
m = m(dist < 3*min(dist));

I noticed that you omitted the last part where I filtered the matches by dropping the bad ones. You could always look at the distribution of the "distances" of the matches found and decide on an appropriate threshold. Here is what I had initially:

hist([m.distance])
title('Distribution of match distances')
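
For example, a minimal sketch of picking a cutoff from that distribution (the limit of 50 matches is arbitrary, for illustration only):

dist = [m.distance];
[~, order] = sort(dist);                % ascending: best (closest) matches first
m = m(order(1:min(50, numel(m))));      % keep at most the 50 best matches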

You could also apply a similar process to the raw keypoints based on their response values, and subsample the points accordingly:

subplot(121), hist([keyObj.response]); title('box')
subplot(122), hist([keyScene.response]); title('scene')
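
And a rough sketch of that subsampling itself (the cutoff of 200 keypoints is arbitrary):

[~, order] = sort([keyObj.response], 'descend');    % strongest responses first
keyObj = keyObj(order(1:min(200, numel(keyObj))));  % keep the strongest keypoints
% note: the corresponding rows of featObj would need the same subsampling to stay in sync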

HTH




Answer 2:


The functions in the Image Processing Toolbox and the Computer Vision System Toolbox use a different convention for transforming points from what you see in most textbooks. In most textbooks, points are represented as column vectors, so the transformation looks like this: H * x, where H is the transformation matrix and x is a matrix whose columns are the points.

In MATLAB, on the other hand, the points are typically represented as row vectors. So you have to switch the order of the multiplication and transpose H: x' * H'.
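As a minimal sketch of the row-vector convention (the point values here are made up; H is the 3x3 homography from above):

pts  = [10 20; 30 40];                          % N-by-2, one [x y] point per row
ptsH = [pts, ones(size(pts,1),1)];              % homogeneous coordinates, N-by-3
out  = ptsH * H';                               % row-vector form: x' * H'
out  = bsxfun(@rdivide, out(:,1:2), out(:,3));  % divide by the third (w) component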

Finally, if you have the Computer Vision System Toolbox for MATLAB, you can solve your problem with less code. Check out this example.




Answer 3:


Try to use the transposition of H.

We compute the homography as x' = H*x, but with MATLAB's row-vector points the equivalent form is x'^T = x^T * H^T (where x'^T denotes the transpose of x'). So transpose your homography and try again.



Source: https://stackoverflow.com/questions/20313863/error-in-calculating-perspective-transform-for-opencv-in-matlab
