Question
I'm looking for an algorithm that is able to find trends in large amounts of data. For instance, given a time t and a variable x, i.e. pairs (t, x), and input such as {(1,1), (2,4), (3,9), (4,16)}, it should be able to figure out that the value of x for t=5 is 25. How is this normally implemented? Do most algorithms compute linear, quadratic, exponential, etc. fits and then choose the fit with the lowest standard deviation of the residuals? Are there other techniques for finding trends in data? Also, what happens when you increase the number of variables, e.g. to analyze large vectors?
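For concreteness, here is a minimal sketch of the approach described above (fit several candidate models and keep the one with the smallest residual error), assuming NumPy's polyfit as the fitting routine and using the sample points from the question:

```python
import numpy as np

# Sample data from the question: x = t^2
t = np.array([1, 2, 3, 4], dtype=float)
x = np.array([1, 4, 9, 16], dtype=float)

best_fit = None
best_rss = float("inf")

# Try polynomial models of increasing degree and keep the best one
# (degree 1 = linear, 2 = quadratic, 3 = cubic).
for degree in range(1, 4):
    coeffs = np.polyfit(t, x, degree)
    residuals = x - np.polyval(coeffs, t)
    rss = float(np.sum(residuals ** 2))  # residual sum of squares
    if rss < best_rss:
        best_rss = rss
        best_fit = coeffs

# Extrapolate to t = 5 with the best-fitting polynomial.
print(np.polyval(best_fit, 5.0))  # ~25.0 for this data
```

Note that picking the model purely by in-sample residual error tends to favor the most flexible model (overfitting), so in practice a penalized criterion such as AIC/BIC or cross-validation is usually used to compare fits of different complexity.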
Answer 1:
This is a really complex question; try starting from: http://en.wikipedia.org/wiki/Interpolation
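A minimal illustration of what the linked article covers, assuming SciPy is available (scipy.interpolate.lagrange builds the unique polynomial passing through the given points):

```python
import numpy as np
from scipy.interpolate import lagrange

t = np.array([1, 2, 3, 4], dtype=float)
x = np.array([1, 4, 9, 16], dtype=float)

# Build the unique polynomial of degree <= 3 passing through all four points.
poly = lagrange(t, x)

print(poly(2.5))  # interpolated value between known points (~6.25)
print(poly(5.0))  # extrapolation beyond the data (~25.0, but less reliable)
```

Strictly speaking, interpolation only answers queries inside the range of the data; predicting t=5 from points at t=1..4 is extrapolation, which is much more sensitive to the choice of model.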
Answer 2:
There is no simple answer to a complex problem: http://en.wikipedia.org/wiki/Regression_analysis
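As a sketch of what regression looks like in the multivariate case the question asks about (many input variables instead of a single t), here is ordinary least squares via numpy.linalg.lstsq; the data below is synthetic and purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 200 observations of 3 input variables.
X = rng.normal(size=(200, 3))
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + 3.0 + rng.normal(scale=0.1, size=200)

# Add a column of ones so the model can learn an intercept term.
X_design = np.column_stack([X, np.ones(len(X))])

# Ordinary least squares: minimize ||X_design @ w - y||^2.
w, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print(w)  # approximately [2.0, -1.0, 0.5, 3.0]
```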
Answer 3:
A neural network might be a good candidate, especially if you want to teach it something nonlinear.
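A minimal sketch of this suggestion, assuming scikit-learn's MLPRegressor as the network; the four points from the question are far too few to train on, so this example uses a denser sample of the same t -> t^2 relationship:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Densely sample the t -> t^2 relationship for training.
t = np.linspace(0, 10, 200).reshape(-1, 1)
x = (t ** 2).ravel()

model = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000, random_state=0)
model.fit(t, x)

# Roughly 25 once training converges; exact value varies with the run.
# Neural networks interpolate much better than they extrapolate, so queries
# far outside the training range degrade quickly.
print(model.predict([[5.0]]))
```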
Source: https://stackoverflow.com/questions/19032717/algorithm-for-finding-trends-in-data