How to utilize Hebbian learning?

天涯浪人 2021-01-30 17:43

I want to upgrade my evolution simulator to use Hebbian learning, like this one. I basically want small creatures to be able to learn how to find food. I achieved that with the bas

4 Answers
  •  滥情空心
    2021-01-30 18:29

    Although Hebbian learning, as a general concept, forms the basis for many learning algorithms, including backpropagation, the simple linear formula you use is very limited. Not only do the weights grow without bound, even after the network has learned all the patterns, but the network can perfectly learn only mutually orthogonal patterns (a stricter condition than mere linear independence).
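    To see the unbounded growth concretely, here is a minimal sketch (my own toy numbers, not from your simulator): a single linear neuron repeatedly shown the same input under the plain Hebb rule. The association is "learned" almost immediately, yet the weight keeps growing.

```python
# Plain linear Hebb rule: dw = learning_rate * pre * post.
# With a fixed input, the weight grows multiplicatively and never settles.
learning_rate = 0.1
x = 1.0          # pre-synaptic activity (constant input)
w = 0.5          # initial weight

for step in range(100):
    y = w * x                    # linear neuron output (post-synaptic)
    w += learning_rate * x * y   # Hebbian update

print(w)  # the weight has grown far beyond any useful range
```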

    Linear Hebbian learning is not even biologically plausible. Biological neural networks are much bigger than yours and are highly non-linear, both the neurons and the synapses between them. In big, non-linear networks, the chances that your patterns are close to orthogonal are higher.

    So, if you insist on using a neural network, I suggest adding hidden layers of neurons and introducing non-linearities, both in the weights, e.g. as fraxel proposed, and in the firing of neurons. Here you might use a sigmoid-shaped function such as tanh (yes, using negative values for "non-firing" is good, since it can lead to weight reduction). In its generalized form, the Hebbian rule can be expressed as

    weight_change = learning_rate * f1(input, weight) * f2(output, target_output)
    

    where f1 and f2 are some functions. In your case, there is no target_output, so f2 is free to ignore it.
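    As one possible instantiation of that generalized rule (the decay term and the constants are my own illustrative choices, not something prescribed by the rule itself): f1 subtracts a weight-decay term so the weight stays bounded, the neuron fires through tanh, and f2 simply returns the output since there is no target_output.

```python
import math

def hebb_update(w, x, learning_rate=0.1, decay=0.5):
    """One step of the generalized Hebb rule:
    dw = learning_rate * f1(input, weight) * f2(output)."""
    y = math.tanh(w * x)   # non-linear firing of the neuron
    f1 = x - decay * w     # input term with weight decay (keeps w bounded)
    f2 = y                 # no target_output, so f2 just uses the output
    return w + learning_rate * f1 * f2

w = 0.5
for _ in range(200):
    w = hebb_update(w, 1.0)
# unlike the plain linear rule, w now settles instead of growing forever
```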

    In order to have neurons in your hidden layers fire, and thus to get a connection between input and output, you can initialize the weights to random values.

    But is a neural network really necessary, or even suitable for your problem? Have you considered simple correlation? I mean, Hebb derived his rule to explain how learning might function in biological systems, not as the best possible machine learning algorithm.
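    To make the correlation idea concrete, here is one hypothetical way a creature could skip the network entirely (the action/reward setup below is invented for illustration): keep a running estimate of how strongly each action correlates with finding food, and prefer the action with the highest estimate.

```python
import random

n_actions = 4
corr = [0.0] * n_actions   # running action-reward correlation estimates
alpha = 0.1                # smoothing factor for the running average

def observe(action, reward):
    # exponential moving average of the reward following each action
    corr[action] += alpha * (reward - corr[action])

# toy environment: action 2 tends to lead to food
random.seed(0)
for _ in range(500):
    a = random.randrange(n_actions)
    r = 1.0 if a == 2 and random.random() < 0.8 else 0.0
    observe(a, r)

best = max(range(n_actions), key=lambda a: corr[a])
```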
