Time Series Prediction via Neural Networks

闹比i 2020-12-13 02:54

I have been working with neural networks for various purposes lately. I have had great success with digit recognition, XOR, and various other easy / hello-world'ish applications.

3 Answers
  • 2020-12-13 03:21

    I think that you've got the basic idea: a "sliding window" approach where a network is trained to use the last k values of a series (T(n-k) ... T(n-1)) to predict the current value (T(n)).

    There are a lot of ways you can do this, however. For example:

    • How big should that window be?
    • Should the data be preprocessed in any way (e.g. to remove outliers)?
    • What network configuration (e.g. # of hidden nodes, # of layers) and algorithm should be used?

    Often people end up figuring out the best way to learn from their particular data by trial and error.
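
    To make this concrete, here is a minimal sketch of the sliding-window setup (mine, not the answerer's), using numpy and scikit-learn's MLPRegressor on a toy series; the window size k and the network size are exactly the kind of knobs the bullets above say you have to tune:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_windows(series, k):
        """Rows of X hold k consecutive values T(n-k)..T(n-1); y holds T(n)."""
        X = np.array([series[i:i + k] for i in range(len(series) - k)])
        y = series[k:]
        return X, y

    # Toy series: a noisy sine wave standing in for real data.
    t = np.arange(500)
    series = np.sin(0.1 * t) + 0.05 * np.random.randn(500)

    k = 10  # window size -- tune by trial and error
    X, y = make_windows(series, k)

    # One hidden layer of 16 units; layer and unit counts are likewise tunable.
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X[:-50], y[:-50])           # train on all but the last 50 windows
    print(net.score(X[-50:], y[-50:]))  # R^2 on the held-out tail
    ```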

    There are a fair number of publicly accessible papers out there about this. Start with these, then follow their citations and the papers that cite them via Google Scholar, and you should have plenty to read:

    • Frank, R. J., Davey, N., and Hunt, S. P. "Time Series Prediction and Neural Networks." Journal of Intelligent and Robotic Systems, vol. 31, no. 1, 2001, pp. 91-103.
    • Connor, J. T., Martin, R. D., and Atlas, L. E. "Recurrent Neural Networks and Robust Time Series Prediction." IEEE Transactions on Neural Networks, vol. 5, no. 2, Mar. 1994, pp. 240-254.
  • 2020-12-13 03:29

    There is a kind of neural network called a recurrent neural network (RNN). One advantage of using these models is that you do not have to define a sliding window for the input examples. A variant of RNNs known as Long Short-Term Memory (LSTM) can take into account many previous time steps, and a "forget gate" is used to allow or disallow remembering results from earlier time steps.
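
    As an illustration (my sketch, not part of the answer), here is a minimal Keras LSTM regressor. In practice you still batch fixed-length training sequences, but the LSTM's gates, including the forget gate, decide what state to carry across time steps; the toy data and layer sizes here are assumptions:

    ```python
    import numpy as np
    import tensorflow as tf

    # Toy data: 200 sine waves with random phases, 31 steps each.
    # Keras expects inputs shaped (samples, timesteps, features).
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, size=(200, 1))
    seqs = np.sin(phase + np.arange(31) * 0.1)  # (200, 31)
    X = seqs[:, :30, None]                      # first 30 steps as input
    y = seqs[:, 30]                             # step 31 as the target

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(30, 1)),  # gated memory over time
        tf.keras.layers.Dense(1),                       # next-value regression
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)
    ```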

  • 2020-12-13 03:36

    Technically this is the same as your digit recognition: the network recognizes an input pattern and returns the value it maps to.

    Now your inputs are the previous steps (T-5 ... T-1), and your output or outputs are the predicted steps (T0, T1, ...), as in the sketch below.

    The mechanics inside the ANN are the same: you train each layer to detect features and to correct its reconstruction of the target, so that the output looks like what is actually going to happen.

    (Some more info about what I mean: tech talk.)
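
    Here is a sketch of the multi-output setup described above (illustrative sizes, my code): the last five values go in, the next two come out. scikit-learn's MLPRegressor accepts a two-column target directly:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_multistep(series, n_in=5, n_out=2):
        """Rows of X are (T-5 .. T-1); rows of y are (T0, T1)."""
        X, y = [], []
        for i in range(len(series) - n_in - n_out + 1):
            X.append(series[i:i + n_in])
            y.append(series[i + n_in:i + n_in + n_out])
        return np.array(X), np.array(y)

    series = np.sin(0.1 * np.arange(400))
    X, y = make_multistep(series)

    # Multi-output regression: the network emits both T0 and T1 at once.
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict(X[:1]))  # predicted (T0, T1) for the first window
    ```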
