Neural network activation function

Submitted by 老子叫甜甜 on 2019-12-22 17:48:41

Question


This is a beginner-level question. I have several training inputs in binary, and for the neural network I am using a sigmoid thresholding function SigmoidFn(Input1*Weights), where

SigmoidFn(x) =  1./(1+exp(-1.*x));

The above function returns continuous real numbers, but I want the output to be binary, since the network is a Hopfield neural net (a single layer with 5 input nodes and 5 output nodes). The problem I am facing is that I cannot correctly understand the usage and implementation of the various thresholding functions. The weights given below are the true weights, as provided in the paper. I am using these weights to generate several training examples (several output samples) while keeping the weights fixed, that is, just running the neural network several times.

Weights = [0.0  0.5  0.0  0.2  0.0
           0.0  0.0  1.0  0.0  0.0
           0.0  0.0  0.0  1.0  0.0
           0.0  1.0  0.0  0.0  0.0
           0.0  0.0  0.0 -0.6  0.0];


Input1 = [0,1,0,0,0]

x = Input1*Weights;   % x = 0 0 1 0 0
  1. As can be seen, the result of the multiplication is the second row of Weights. Is this a mere coincidence?

  2. Next,

    SigmoidFn  =  1./(1+exp(-1.*x))
    
    SigmoidFn =
    
    0.5000    0.5000    0.7311    0.5000    0.5000
    
  3. round(SigmoidFn)
    
    ans =
    
         1     1     1     1     1
    
  4. Input2 = [1,0,0,0,0]
    
    x = Input2*Weights
    
    x =  0  0.5000  0  0.2000  0
    SigmoidFn  =  1./(1+exp(-1.*x))
    
    SigmoidFn =  0.5000    0.6225    0.5000    0.5498    0.5000
    
    >> round(SigmoidFn)
    
    ans =
    
          1     1     1     1     1
    

    Is it good practice to use the round function, round(SigmoidFn(x))? The result obtained is not correct. How should I obtain a binary result when using a threshold function: (a) hard limit, (b) logistic sigmoid, (c) tanh?

Can somebody please show the proper code for thresholding, and give a brief explanation of when to use which activation function? I mean, there must be some logic; otherwise, why are there different kinds of functions? EDIT: Implementation of Hopfield to recall the input pattern by successive iterations, keeping the weights fixed.

Training1 = [1,0,0,0,0];
offset = 0;
t = 1;
X(t,:) = Training1;
err = 1;
while (err ~= 0)
    Out = X(t,:)*Weights > offset;                      % hard-limit update
    err = ((Out - X(t,:))*(Out - X(t,:)).')/numel(Out); % change from previous state
    t = t + 1;
    X(t,:) = Out;
end
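The arithmetic in steps 1-4 above can be checked with a short sketch (NumPy here rather than MATLAB, purely as a translation for clarity). It also shows why rounding saturates everything to 1: sigmoid(0) = 0.5 sits exactly on the rounding boundary, and MATLAB's round(0.5) is 1.

```python
import numpy as np

# The weights and input from the question, redone in NumPy.
Weights = np.array([[0.0, 0.5, 0.0, 0.2, 0.0],
                    [0.0, 0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, -0.6, 0.0]])
Input1 = np.array([0, 1, 0, 0, 0])

# Step 1: a one-hot input simply selects one row of Weights, so getting
# the second row back is not a coincidence.
x = Input1 @ Weights
print(x)  # [0. 0. 1. 0. 0.]

# Steps 2-3: sigmoid(0) = 0.5 exactly, so every inactive node lands on
# the rounding boundary and MATLAB's round() pushes it up to 1 -- hence
# the all-ones output. Thresholding with >= 0.5 behaves the same way.
sig = 1.0 / (1.0 + np.exp(-x))
binary = (sig >= 0.5).astype(int)
print(sig)     # [0.5 0.5 0.7311 0.5 0.5] (approximately)
print(binary)  # [1 1 1 1 1]
```

(Note that NumPy's own `np.round` rounds 0.5 to 0 by the round-half-to-even rule, so an explicit threshold is used instead of `round`.)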

Answer 1:


Hopfield networks do not use a sigmoid nonlinearity; the state of a node is simply updated to whether its weighted input is greater than or equal to its offset.

You want something like

output2 = Weights * Input1' >= offsets;

where offsets is the same size as Input1. I used Weights * Input1' instead of Input1 * Weights because most examples I have seen use left-multiplication for updating (that is, the rows of the weight matrix label the input nodes and the columns label the output nodes), but you will have to look at wherever you got your weight matrix to be sure.

You should be aware that you will have to perform this update operation many times before you converge to a fixed point which represents a stored pattern.

In response to your further questions, the weight matrix you have chosen does not store any memories that can be recalled with a Hopfield network. It contains a cycle 2 -> 3 -> 4 -> 2 ... that will not allow the network to converge.
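This cycle is easy to verify empirically. The sketch below (NumPy, using the strict ">" threshold from the question's EDIT; with ">=" the network instead jumps straight to the trivial all-ones state) tracks the synchronous states until one repeats:

```python
import numpy as np

Weights = np.array([[0.0, 0.5, 0.0, 0.2, 0.0],
                    [0.0, 0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, -0.6, 0.0]])
X = np.array([1, 0, 0, 0, 0])
seen = []
cycle_state = None
for _ in range(10):
    X = (X @ Weights > 0).astype(int)  # synchronous hard-limit update
    state = tuple(X)
    if state in seen:
        cycle_state = state            # a previous state recurred: a cycle
        break
    seen.append(state)
print(cycle_state)  # (0, 1, 0, 1, 0) -- the network oscillates instead of converging
```

The three states that recur activate nodes {2,4}, {2,3}, and {3,4} in turn, which is exactly the 2 -> 3 -> 4 -> 2 cycle in the weight matrix.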

In general you would recover a memory in a way similar to what you wrote in your edit:

X = [1,0,0,0,0];
offset = 0;
t = 1;
err = 1;
nIter = 100;

while err ~= 0 && t <= nIter
   prev = X;
   X = X * Weights >= offset;
   err = ~isequal(X, prev);
   t = t + 1;
end

if ~err
    disp(X);
end

If you refer to the Wikipedia page on Hopfield networks, this is what is referred to as the synchronous update method.
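For comparison, the same synchronous recall loop can be written as a small NumPy function (a sketch under the same row-vector convention as the MATLAB above; `recall` and its parameters are names I made up, not a library API):

```python
import numpy as np

def recall(X, Weights, offsets=0.0, n_iter=100):
    """Synchronous Hopfield recall: update every node at once from the
    current state; stop at a fixed point or after n_iter sweeps."""
    X = np.asarray(X, dtype=float)
    for _ in range(n_iter):
        new = (X @ Weights >= offsets).astype(float)
        if np.array_equal(new, X):
            return new    # converged to a fixed point
        X = new
    return None           # no fixed point within the iteration budget

Weights = np.array([[0.0, 0.5, 0.0, 0.2, 0.0],
                    [0.0, 0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, -0.6, 0.0]])
result = recall([1, 0, 0, 0, 0], Weights)
print(result)  # all ones: with ">=" and zero offsets every zero-input node
               # activates, so the network collapses to a trivial fixed
               # point -- more evidence this matrix stores no useful memory
```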



Source: https://stackoverflow.com/questions/22768493/neuralnetwork-activation-function
