emgucv: pan card improper skew detection in C#

Submitted by 北慕城南 on 2020-06-29 04:00:09

Question


I have three images of a PAN card that I am using to test image skew detection with Emgu CV and C#.

The 1st image (top) is detected as 180 degrees, which is correct.

The 2nd image (middle) is detected as 90 degrees but should be detected as 180 degrees.

The 3rd image is detected as 180 degrees but should be detected as 90 degrees.

One observation I want to share: when I manually crop the unwanted parts above and below the PAN card (in Paint), the code below gives me the expected result.

Now I want to understand how I can remove the unwanted parts programmatically. I have played with contours and ROIs, but I am not able to figure out how to make it work. I am not sure whether Emgu CV selects the contour itself or whether I have to do something more.

Please suggest a suitable code example.

Please check the code below for angle detection, and please help me. Thanks in advance.

imgInput = new Image<Bgr, byte>(impath);
Image<Gray, Byte> img2 = imgInput.Convert<Gray, Byte>();
Bitmap imgs;
Image<Gray, byte> imgout = imgInput.Convert<Gray, byte>().Not().ThresholdBinary(new Gray(50), new Gray(125));
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Emgu.CV.Mat hier = new Emgu.CV.Mat();
var blurredImage = imgInput.SmoothGaussian(5, 5, 0, 0);
CvInvoke.AdaptiveThreshold(imgout, imgout, 255, Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC, Emgu.CV.CvEnum.ThresholdType.Binary, 5, 45);

CvInvoke.FindContours(imgout, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
if (contours.Size >= 1)
{
    for (int i = 0; i <= contours.Size; i++)
    {
        Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
        RotatedRect box = CvInvoke.MinAreaRect(contours[i]);
        PointF[] Vertices = box.GetVertices();
        PointF point = box.Center;
        PointF edge1 = new PointF(Vertices[1].X - Vertices[0].X, Vertices[1].Y - Vertices[0].Y);
        PointF edge2 = new PointF(Vertices[2].X - Vertices[1].X, Vertices[2].Y - Vertices[1].Y);
        double r = edge1.X + edge1.Y;
        double edge1Magnitude = Math.Sqrt(Math.Pow(edge1.X, 2) + Math.Pow(edge1.Y, 2));
        double edge2Magnitude = Math.Sqrt(Math.Pow(edge2.X, 2) + Math.Pow(edge2.Y, 2));
        PointF primaryEdge = edge1Magnitude > edge2Magnitude ? edge1 : edge2;
        double primaryMagnitude = edge1Magnitude > edge2Magnitude ? edge1Magnitude : edge2Magnitude;
        PointF reference = new PointF(1, 0);
        double refMagnitude = 1;
        double thetaRads = Math.Acos(((primaryEdge.X * reference.X) + (primaryEdge.Y * reference.Y)) / (primaryMagnitude * refMagnitude));
        double thetaDeg = thetaRads * 180 / Math.PI;
        imgInput = imgInput.Rotate(thetaDeg, new Bgr());
        imgout = imgout.Rotate(box.Angle, new Gray());
        Bitmap bmp = imgout.Bitmap;
        break;
    }
}

Answer 1:


The Problem

Let us start with the problem before the solution:

Your Code

When you submit code asking for help, at least make some effort to "clean" it. Help people help you! There are so many lines of code here that do nothing, and you declare variables that are never used. Add some comments that let people know what you think your code should do.

Bitmap imgs;
var blurredImage = imgInput.SmoothGaussian(5, 5, 0, 0);
Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
PointF point = box.Center;
double r = edge1.X + edge1.Y;
// Etc

Adaptive Thresholding

The following line of code produces the following images:

 CvInvoke.AdaptiveThreshold(imgout, imgout, 255, Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC, Emgu.CV.CvEnum.ThresholdType.Binary, 5, 45);

Image 1

Image 2

Image 3

Clearly this is not what you're aiming for since the primary contour, the card edge, is completely lost. As a tip, you can always use the following code to display images at runtime to help you with debugging.

CvInvoke.NamedWindow("Output");
CvInvoke.Imshow("Output", imgout);
CvInvoke.WaitKey();
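
If opening a window is not practical (for example on a headless machine), a minimal alternative sketch is to write the intermediate image to disk instead; the file name here is just an example.

// Save the intermediate image for offline inspection; the name is arbitrary.
CvInvoke.Imwrite("debug_imgout.png", imgout);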

The Solution

Since in your example images the card is primarily a similar Value (in the HSV sense) to the background, I do not think simple grayscale thresholding is the correct approach in this case. I propose the following:

Algorithm

  1. Use Canny Edge Detection to extract the edges in the image.

  2. Dilate the edges so that the card content merges into a single blob.

  3. Use Contour Detection to filter for the combined edges with the largest bounding area.

  4. Fit this primary contour with a rotated rectangle in order to extract the corner points.

  5. Use the corner points to define a transformation matrix to be applied using WarpAffine.

  6. Warp and crop the image.

The Code

You may wish to experiment with the parameters of the Canny Detection and Dilation; a small parameter-sweep sketch is included after the main listing.

// Required namespaces for the listing below (System.Drawing for Point/PointF/
// Rectangle/Size, System.Linq for the queries, Emgu.CV.* for the OpenCV wrappers)
using System;
using System.Drawing;
using System.Linq;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Working Images
Image<Bgr, byte> imgInput = new Image<Bgr, byte>("Test1.jpg");
Image<Gray, byte> imgEdges = new Image<Gray, byte>(imgInput.Size);
Image<Gray, byte> imgDilatedEdges = new Image<Gray, byte>(imgInput.Size);
Image<Bgr, byte> imgOutput;

// 1. Edge Detection
CvInvoke.Canny(imgInput, imgEdges, 25, 80);

// 2. Dilation
CvInvoke.Dilate(
    imgEdges,
    imgDilatedEdges,
    CvInvoke.GetStructuringElement(
        ElementShape.Rectangle,
        new Size(3, 3),
        new Point(-1, -1)),
    new Point(-1, -1),
    5,
    BorderType.Default,
    new MCvScalar(0));

// 3. Contours Detection
VectorOfVectorOfPoint inputContours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(
    imgDilatedEdges,
    inputContours,
    hierarchy,
    RetrType.External,
    ChainApproxMethod.ChainApproxSimple);
VectorOfPoint primaryContour = (from contour in inputContours.ToList()
                                orderby contour.GetArea() descending
                                select contour).FirstOrDefault();

// 4. Corner Point Extraction
RotatedRect bounding = CvInvoke.MinAreaRect(primaryContour);
PointF topLeft = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(point.X, 2) + Math.Pow(point.Y, 2))
                  select point).FirstOrDefault();
PointF topRight = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(imgInput.Width - point.X, 2) + Math.Pow(point.Y, 2))
                  select point).FirstOrDefault();
PointF botLeft = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(point.X, 2) + Math.Pow(imgInput.Height - point.Y, 2))
                  select point).FirstOrDefault();
PointF botRight = (from point in bounding.GetVertices()
                   orderby Math.Sqrt(Math.Pow(imgInput.Width - point.X, 2) + Math.Pow(imgInput.Height - point.Y, 2))
                   select point).FirstOrDefault();
double boundingWidth = Math.Sqrt(Math.Pow(topRight.X - topLeft.X, 2) + Math.Pow(topRight.Y - topLeft.Y, 2));
double boundingHeight = Math.Sqrt(Math.Pow(botLeft.X - topLeft.X, 2) + Math.Pow(botLeft.Y - topLeft.Y, 2));
bool isLandscape = boundingWidth > boundingHeight;

// 5. Define warp criteria as triangles
PointF[] srcTriangle = new PointF[3];
PointF[] dstTriangle = new PointF[3];
Rectangle ROI;
if (isLandscape)
{
    srcTriangle[0] = botLeft;
    srcTriangle[1] = topLeft;
    srcTriangle[2] = topRight;
    dstTriangle[0] = new PointF(0, (float)boundingHeight);
    dstTriangle[1] = new PointF(0, 0);
    dstTriangle[2] = new PointF((float)boundingWidth, 0);
    ROI = new Rectangle(0, 0, (int)boundingWidth, (int)boundingHeight);
}
else
{
    srcTriangle[0] = topLeft;
    srcTriangle[1] = topRight;
    srcTriangle[2] = botRight;
    dstTriangle[0] = new PointF(0, (float)boundingWidth);
    dstTriangle[1] = new PointF(0, 0);
    dstTriangle[2] = new PointF((float)boundingHeight, 0);
    ROI = new Rectangle(0, 0, (int)boundingHeight, (int)boundingWidth);
}
Mat warpMat = new Mat(2, 3, DepthType.Cv32F, 1);
warpMat = CvInvoke.GetAffineTransform(srcTriangle, dstTriangle);

// 6. Apply the warp and crop
CvInvoke.WarpAffine(imgInput, imgInput, warpMat, imgInput.Size);
imgOutput = imgInput.Copy(ROI);
imgOutput.Save("Output1.bmp");
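
If the card edge comes out broken or lost, a minimal parameter-sweep sketch such as the following (the threshold values and file names are just examples, assuming the same "Test1.jpg" input as above) saves one Canny edge map per threshold pair so you can inspect which combination preserves the card outline before tuning the dilation:

// Sweep Canny thresholds and save each edge map for visual comparison.
Image<Bgr, byte> sweepInput = new Image<Bgr, byte>("Test1.jpg");
foreach (int low in new[] { 10, 25, 50 })
{
    foreach (int high in new[] { 60, 80, 120 })
    {
        Image<Gray, byte> edges = new Image<Gray, byte>(sweepInput.Size);
        CvInvoke.Canny(sweepInput, edges, low, high);
        edges.Save($"canny_{low}_{high}.png"); // arbitrary output file name
    }
}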

Two extension methods are used:

static List<VectorOfPoint> ToList(this VectorOfVectorOfPoint vectorOfVectorOfPoint)
{
    List<VectorOfPoint> result = new List<VectorOfPoint>();
    for (int contour = 0; contour < vectorOfVectorOfPoint.Size; contour++)
    {
        result.Add(vectorOfVectorOfPoint[contour]);
    }
    return result;
}

static double GetArea(this VectorOfPoint contour)
{
    RotatedRect bounding = CvInvoke.MinAreaRect(contour);
    return bounding.Size.Width * bounding.Size.Height;
}
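
Note that C# requires extension methods to live inside a non-nested static class; a minimal wrapper (the class name here is arbitrary, and the method bodies are the same as above) would look like this:

using System.Collections.Generic;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Hypothetical container class; any top-level static class will do.
public static class EmguCvExtensions
{
    // Copy a VectorOfVectorOfPoint into a List so LINQ can be used on it.
    public static List<VectorOfPoint> ToList(this VectorOfVectorOfPoint vectorOfVectorOfPoint)
    {
        List<VectorOfPoint> result = new List<VectorOfPoint>();
        for (int contour = 0; contour < vectorOfVectorOfPoint.Size; contour++)
        {
            result.Add(vectorOfVectorOfPoint[contour]);
        }
        return result;
    }

    // Approximate a contour's area by the area of its minimum-area rectangle.
    public static double GetArea(this VectorOfPoint contour)
    {
        RotatedRect bounding = CvInvoke.MinAreaRect(contour);
        return bounding.Size.Width * bounding.Size.Height;
    }
}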

Outputs

Meta Example



Source: https://stackoverflow.com/questions/62550517/emgucv-pan-card-improper-skew-detection-in-c-sharp
