How to fit the 2D scatter data with a line with C++

Asked by 春和景丽, 2020-12-16 23:19

I used to work with MATLAB, where for this problem I could use p = polyfit(x,y,1) to estimate the best-fit line for scatter data. I was wondering which routines or approaches I could use to do the same in C++.

7 Answers
  • 2020-12-16 23:51

You can also use or study this implementation; there is also documentation here.

  • 2020-12-16 23:53

    I would suggest coding it from scratch. It is a very simple implementation in C++. You can code up both the intercept and gradient for least-squares fit (the same method as polyfit) from your data directly from the formulas here

    http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line

These are closed-form formulas that you can easily evaluate yourself using loops. If you were using higher-degree fits, I would suggest a matrix library or more sophisticated algorithms, but for simple linear regression as you describe above this is all you need. Matrices and linear algebra routines would be overkill for such a problem (in my opinion).

  • 2020-12-16 23:55

Note: this is NOT an answer to this question but to "Line closest to a set of points", which has been flagged as a duplicate of this one (incorrectly, in my opinion); there is no way to add new answers there.

    The question asks for:

Find the line whose distance from all the points is minimum. By distance I mean the shortest distance between the point and the line.

The most usual interpretation of the distance "between the point and the line" is the Euclidean distance, and the most common interpretation of "from all points" is the sum of the distances (in absolute or squared value).

When the target is to minimize the sum of squared Euclidean distances, ordinary least-squares linear regression is not the algorithm to use. In addition, linear regression cannot produce a vertical line. The algorithm to use is "total least squares". See, for example, Wikipedia for the problem description and this answer on Math Stack Exchange for the details of the formulation.

  • 2020-12-17 00:02

Fitting a line can be accomplished in different ways. Least squares means minimizing the sum of the squared distances, but you could choose another cost function, for example the (non-squared) absolute distance; normally, though, the squared distance (least squares) is used. There is also more than one way to define the distance itself. Normally you measure it along the y-axis only, but you could instead use the total/orthogonal distance, where the distance is measured in both the x- and y-directions. This can give a better fit if you also have errors in the x-direction (say, the time of measurement) and you did not start the measurement at exactly the time recorded in the data. Closed-form algorithms exist for both the least-squares and the total-least-squares line fit, so if you fit with one of those, you get the line with the minimal sum of squared distances to the data points. You cannot fit a better line in the sense of that definition; you can only change the definition, for example by taking another cost function or by defining the distance in another way.

There is a lot of material about fitting models to data that you could study, but normally the "least-squares line fit" is used and you will be fine most of the time. If you have a special case, though, it can be necessary to think about what you are doing. Applying least squares is done in maybe a few minutes; thinking about which method best fits your problem involves understanding the math, which can take an indefinite amount of time :-).

  • 2020-12-17 00:08

To fit a line y = param[0]*x + param[1], simply do this:

    double sum_x = 0, sum_y = 0, sum_xy = 0, sum_x2 = 0;
    
    // loop over the data (ninliers points in arrays x[] and y[]):
    for (int i = 0; i < ninliers; i++) {
        sum_x  += x[i];
        sum_y  += y[i];
        sum_xy += x[i] * y[i];
        sum_x2 += x[i] * x[i];
    }
    
    // means
    double mean_x = sum_x / ninliers;
    double mean_y = sum_y / ninliers;
    
    double varx = sum_x2 - sum_x * mean_x;
    double cov  = sum_xy - sum_x * mean_y;
    
    // check that varx is not zero (vertical line) before dividing
    param[0] = cov / varx;                 // slope
    param[1] = mean_y - param[0] * mean_x; // intercept
    

More on the topic: http://easycalculation.com/statistics/learn-regression.php (the formulas are the same; they are just multiplied and divided by N, the sample size). If you want to fit a plane to 3D data, use a similar approach: http://www.mymathforum.com/viewtopic.php?f=13&t=8793

Disclaimer: all quadratic fits are linear and optimal in the sense that they reduce the noise in the parameters. However, you might be interested in reducing the noise in the data instead. You might also want to ignore outliers, since they can bias your solution greatly. Both problems can be solved with RANSAC. See my post at:

  • 2020-12-17 00:13

This page describes the algorithm more simply than Wikipedia, without the extra steps of calculating the means etc.: http://faculty.cs.niu.edu/~hutchins/csci230/best-fit.htm . Almost quoted from there, in C++ it is:

    #include <vector>
    #include <cmath>
    
    struct Point {
      double _x, _y;
    };
    struct Line {
      double _slope, _yInt;
      double getYforX(double x) {
        return _slope*x + _yInt;
      }
      // Construct line from points
      bool fitPoints(const std::vector<Point> &pts) {
    const int nPoints = static_cast<int>(pts.size());
        if( nPoints < 2 ) {
      // Fail: need at least two points to define a line
          return false;
        }
        double sumX=0, sumY=0, sumXY=0, sumX2=0;
        for(int i=0; i<nPoints; i++) {
          sumX += pts[i]._x;
          sumY += pts[i]._y;
          sumXY += pts[i]._x * pts[i]._y;
          sumX2 += pts[i]._x * pts[i]._x;
        }
        double xMean = sumX / nPoints;
        double yMean = sumY / nPoints;
        double denominator = sumX2 - sumX * xMean;
        // You can tune the eps (1e-7) below for your specific task
        if( std::fabs(denominator) < 1e-7 ) {
          // Fail: it seems a vertical line
          return false;
        }
        _slope = (sumXY - sumX * yMean) / denominator;
        _yInt = yMean - _slope * xMean;
        return true;
      }
    };
    

Please be aware that both this algorithm and the algorithm from Wikipedia ( http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line ) fail when the "best" description of the points is a vertical line. They fail because they use the

    y = k*x + b 
    

line equation, which is intrinsically incapable of describing vertical lines. If you also want to cover the cases when the data points are "best" described by a vertical line, you need a line-fitting algorithm which uses the

    A*x + B*y + C = 0
    

    line equation. You can still modify the current algorithm to produce that equation:

    y = k*x + b <=>
    y - k*x - b = 0 <=>
    B=1, A=-k, C=-b
    

    In terms of the above code:

    B=1, A=-_slope, C=-_yInt
    

    And in "then" block of the if checking for denominator equal to 0, instead of // Fail: it seems a vertical line, produce the following line equation:

    x = xMean <=>
    x - xMean = 0 <=>
    A=1, B=0, C=-xMean
    

I've just noticed that the original article I was referring to has been deleted. This web page proposes a slightly different formula for the line fit: http://hotmath.com/hotmath_help/topics/line-of-best-fit.html

    double denominator = sumX2 - 2 * sumX * xMean + nPoints * xMean * xMean;
    ...
    _slope = (sumXY - sumY*xMean - sumX * yMean + nPoints * xMean * yMean) / denominator;
    

    The formulas are identical because nPoints*xMean == sumX and nPoints*xMean*yMean == sumX * yMean == sumY * xMean.
