opencv - counting non directional edges from canny


I think you are confusing edge detection with gradient detection. Canny provides an edge map based on the gradient magnitude (normally computed with a Sobel operator, though it can use others). Because Canny only returns the thresholded gradient magnitude information, it cannot provide you with the orientation information.

EDIT : I should clarify that the Canny algorithm does use gradient orientation for the non-maximum suppression step. However, the OpenCV implementation of Canny hides this orientation information from you, and only returns an edge magnitude map.
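
For context, here is a minimal sketch of what the Canny call itself gives you in OpenCV (the file name and thresholds are placeholder values, not something from the question): the result is a single-channel edge map with no orientation attached.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    // Load any grayscale test image (the path is a placeholder).
    Mat image = imread("input.png", 0);

    // Canny returns a CV_8UC1 map: 255 on edge pixels, 0 elsewhere.
    // The orientation used internally for non-maximum suppression
    // is discarded before the result is handed back.
    Mat edges;
    Canny(image, edges, 50.0, 150.0);

    imshow("edges", edges);
    waitKey();

    return 0;
}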

The basic algorithm to get magnitude and orientation of the gradient is as follows:

  1. Compute Sobel in the X direction (Sx).
  2. Compute Sobel in the Y direction (Sy).
  3. Compute the gradient magnitude sqrt(Sx*Sx + Sy*Sy).
  4. Compute the gradient orientation with arctan(Sy / Sx) (in practice atan2(Sy, Sx), which resolves the full 0-360 degree range).

This algorithm can be implemented using the following OpenCV functions: Sobel, magnitude, and phase.

Below is a sample that computes the gradient magnitude and phase as well as shows a coarse color mapping of the gradient orientations:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

Mat mat2gray(const cv::Mat& src)
{
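    // Rescale the input to the range [0, 255] and convert to 8-bit for display.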
    Mat dst;
    normalize(src, dst, 0.0, 255.0, cv::NORM_MINMAX, CV_8U);

    return dst;
}

Mat orientationMap(const cv::Mat& mag, const cv::Mat& ori, double thresh = 1.0)
{
    Mat oriMap = Mat::zeros(ori.size(), CV_8UC3);
    Vec3b red(0, 0, 255);
    Vec3b cyan(255, 255, 0);
    Vec3b green(0, 255, 0);
    Vec3b yellow(0, 255, 255);
    // Color each pixel whose gradient magnitude exceeds the threshold
    // according to which 90-degree quadrant its orientation falls in.
    for(int r = 0; r < mag.rows; r++)
    {
        for(int c = 0; c < mag.cols; c++)
        {
            if(mag.at<float>(r, c) > thresh)
            {
                float oriPixel = ori.at<float>(r, c);
                if(oriPixel < 90.0f)
                    oriMap.at<Vec3b>(r, c) = red;
                else if(oriPixel < 180.0f)
                    oriMap.at<Vec3b>(r, c) = cyan;
                else if(oriPixel < 270.0f)
                    oriMap.at<Vec3b>(r, c) = green;
                else if(oriPixel < 360.0f)
                    oriMap.at<Vec3b>(r, c) = yellow;
            }
        }
    }

    return oriMap;
}

int main(int argc, char* argv[])
{
    Mat image = Mat::zeros(Size(320, 240), CV_8UC1);
    circle(image, Point(160, 120), 80, Scalar(255, 255, 255), -1, CV_AA);

    imshow("original", image);

    // Sobel derivative in the x direction.
    Mat Sx;
    Sobel(image, Sx, CV_32F, 1, 0, 3);

    // Sobel derivative in the y direction.
    Mat Sy;
    Sobel(image, Sy, CV_32F, 0, 1, 3);

    // Per-pixel gradient magnitude and orientation (in degrees).
    Mat mag, ori;
    magnitude(Sx, Sy, mag);
    phase(Sx, Sy, ori, true);

    Mat oriMap = orientationMap(mag, ori, 1.0);

    imshow("magnitude", mat2gray(mag));
    imshow("orientation", mat2gray(ori));
    imshow("orientation map", oriMap);
    waitKey();

    return 0;
}

Using a circle image as input, this produces the gradient magnitude and orientation images, and finally the coarse gradient orientation map.

UPDATE: Abid asked a great question in the comments, "what is meant by orientation here?", which I thought needed some further discussion. I am assuming that the phase function does not switch coordinate frames from the normal image-processing convention, where the positive y-axis points down and the positive x-axis points right. Under this assumption, the angles reported for the gradient vectors around the circle are measured in that image frame.

This can be difficult to get used to since the axes are flipped from what we are normally used to in math class. So, the gradient orientation is the angle of the vector that points perpendicular to the edge, in the direction of increasing intensity, measured in this image coordinate frame.
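
As a quick sanity check of that convention, here is a small sketch (the gradient values fed in are made up) that passes a few unit vectors to phase and prints the angles it reports:

#include <opencv2/core/core.hpp>

#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // Made-up unit gradients: +x, +y (down in image coordinates), -x, -y.
    float xs[] = { 1.0f, 0.0f, -1.0f,  0.0f };
    float ys[] = { 0.0f, 1.0f,  0.0f, -1.0f };
    Mat Sx(1, 4, CV_32F, xs);
    Mat Sy(1, 4, CV_32F, ys);

    Mat ori;
    phase(Sx, Sy, ori, true); // true -> angles in degrees, range [0, 360)

    // Expect roughly [0, 90, 180, 270]: the angle is measured from the
    // +x axis toward the +y axis (downward), i.e. in the image frame.
    cout << ori << endl;

    return 0;
}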

Hope you found that helpful!
