Liquid State Machine: How it works and how to use it?

傲寒 2021-01-30 09:38

I am now learning about LSM (Liquid State Machines), and I try to understand how they are used for learning.

I am pretty confused from what I read over the web.

3 Answers
  •  独厮守ぢ
    2021-01-30 09:59

    From your questions, it seems that you are on the right track. In any case, the Liquid State Machine and the Echo State Machine are complex topics that draw on computational neuroscience, physics, and machine learning, touching on chaos, dynamical systems, and feedback systems. So it's OK if you find it hard to wrap your head around them.

    To answer your questions:

    1. Most implementations of Liquid State Machines leave the reservoir of neurons untrained. There have been some attempts to train the reservoir, but they haven't had the dramatic success that would justify the computational power this requires. (See: "Reservoir Computing Approaches to Recurrent Neural Network Training" or "The p-Delta Learning Rule for Parallel Perceptrons".)

      My opinion is that if you want to use the liquid as a classifier, in terms of separability or generalization of patterns, you can gain much more from the way the neurons connect to each other (see Hazan, H. and Manevitz, L., "Topological constraints and robustness in liquid state machines", Expert Systems with Applications, Volume 39, Issue 2, Pages 1597-1606, February 2012, or "Which Model to Use for the Liquid State Machine?"), or from the biological approach, in my opinion the most interesting one (see "What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?").
    2. You are right: you need to wait at least until the input has finished before reading the liquid, otherwise you risk detecting the input itself rather than the activity that occurs in the liquid as a result of the input, which is what you should be reading.
    3. Yes, you can think of the liquid as playing the role of a kernel in an SVM: it projects the data points into some high-dimensional space, and the detector (readout) is the part that tries to separate the classes in the dataset. As a rule of thumb, the number of neurons and the way they connect to each other determine the degree of complexity of the liquid.
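    To make points 1-3 concrete, here is a minimal echo-state-style sketch in Python/NumPy: an untrained random reservoir projects input sequences into a high-dimensional state, the state is sampled only after the whole input has been fed in (as point 2 advises), and only a linear readout is trained. The unit counts, the spectral-radius scaling, and the toy sine-vs-noise task are illustrative assumptions, not something from this answer.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensions: 1 input channel, 100 reservoir units, 2 classes.
    n_in, n_res, n_out = 1, 100, 2

    # Untrained random reservoir, scaled so the spectral radius is below 1
    # (a common heuristic for a stable, fading "echo" of past inputs).
    W_res = rng.normal(0, 1, (n_res, n_res))
    W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))
    W_in = rng.normal(0, 1, (n_res, n_in))

    def run_reservoir(u):
        """Drive the reservoir with input sequence u (T x n_in); return final state."""
        x = np.zeros(n_res)
        for u_t in u:
            x = np.tanh(W_res @ x + W_in @ u_t)  # analog (echo-state-like) units
        return x  # read the state only after the whole input, per point 2

    # Toy dataset: 40 random-phase sine sequences vs. 40 noise sequences.
    T = 50
    X, y = [], []
    for label, gen in enumerate([
            lambda: np.sin(np.linspace(0, 6, T) + rng.uniform(0, 6)),
            lambda: rng.normal(0, 1, T)]):
        for _ in range(40):
            X.append(run_reservoir(gen().reshape(-1, 1)))
            y.append(label)
    X, y = np.array(X), np.array(y)

    # Train ONLY the linear readout (ridge regression on one-hot targets);
    # the reservoir itself stays untrained, per point 1.
    Y = np.eye(n_out)[y]
    W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ Y)
    pred = (X @ W_out).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())
    ```

    The untrained reservoir plays the kernel's role from point 3: it lifts the sequences into a 100-dimensional state space where a plain linear separator suffices.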

    Regarding LIF (Leaky Integrate & Fire) neurons: as I see it (I could be wrong), the big difference between the two approaches is the individual unit. The Liquid State Machine uses biologically inspired spiking neurons, while the Echo State Network uses more analog units. So in terms of "very short term memory", in the Liquid State approach each individual neuron remembers its own history, whereas in the Echo State approach each individual neuron reacts based only on the current state, and the memory is therefore stored in the activity between the units.
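    To illustrate the "each neuron remembers its own history" point, here is a minimal single-neuron LIF sketch with illustrative parameter values (not taken from this answer): the membrane potential leaks toward rest while integrating input, so the unit itself carries a short-term memory of its recent drive.

    ```python
    import numpy as np

    def lif_trace(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
        """Simulate one leaky integrate-and-fire neuron (Euler steps).

        The membrane potential v leaks toward 0 with time constant tau while
        integrating the input current; crossing v_thresh emits a spike and
        resets v. Parameter values here are purely illustrative.
        """
        v, vs, spikes = 0.0, [], []
        for I in input_current:
            v += dt * (-v / tau + I)   # leak + integrate: v carries history
            if v >= v_thresh:          # threshold crossing: fire and reset
                spikes.append(True)
                v = v_reset
            else:
                spikes.append(False)
            vs.append(v)
        return np.array(vs), np.array(spikes)

    # A constant drive makes the neuron charge up and spike at a regular rate.
    v_trace, spikes = lif_trace(np.full(100, 0.08))
    print("spike count over 100 steps:", spikes.sum())
    ```

    Contrast this with the tanh units of an Echo State Network, whose output is a memoryless function of the current weighted input; there, the short-term memory lives only in the recurrent activity between units.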
