Resilient backpropagation neural network - question about gradient


Question


First, I want to say that I'm really new to neural networks and don't understand them very well ;)

I've written my first C# implementation of a backpropagation neural network. I've tested it on XOR and it seems to work.

Now I would like to change my implementation to use resilient backpropagation (Rprop - http://en.wikipedia.org/wiki/Rprop).

The definition says: "Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each 'weight'."

Could somebody tell me what the partial derivative over all patterns is, and how I should compute this partial derivative for a neuron in a hidden layer?

Thanks a lot

UPDATE:

My implementation is based on this Java code: www.dia.fi.upm.es/~jamartin/downloads/bpnn.java

My backPropagate method looks like this:

public double backPropagate(double[] targets)
    {
        double error, change;

        // calculate error terms for output
        double[] output_deltas = new double[outputsNumber];

        for (int k = 0; k < outputsNumber; k++)
        {

            error = targets[k] - activationsOutputs[k];
            output_deltas[k] = Dsigmoid(activationsOutputs[k]) * error;
        }

        // calculate error terms for hidden
        double[] hidden_deltas = new double[hiddenNumber];

        for (int j = 0; j < hiddenNumber; j++)
        {
            error = 0.0;

            for (int k = 0; k < outputsNumber; k++)
            {
                error = error + output_deltas[k] * weightsOutputs[j, k];
            }

            hidden_deltas[j] = Dsigmoid(activationsHidden[j]) * error;
        }

        //update output weights
        for (int j = 0; j < hiddenNumber; j++)
        {
            for (int k = 0; k < outputsNumber; k++)
            {
                change = output_deltas[k] * activationsHidden[j];
                weightsOutputs[j, k] = weightsOutputs[j, k] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumOutputs[j, k];
                lastChangeWeightsForMomentumOutputs[j, k] = change;
            }
        }

        // update input weights
        for (int i = 0; i < inputsNumber; i++)
        {
            for (int j = 0; j < hiddenNumber; j++)
            {
                change = hidden_deltas[j] * activationsInputs[i];
                weightsInputs[i, j] = weightsInputs[i, j] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumInputs[i, j];
                lastChangeWeightsForMomentumInputs[i, j] = change;
            }
        }

        // calculate error
        error = 0.0;

        for (int k = 0; k < outputsNumber; k++)
        {
            error = error + 0.5 * (targets[k] - activationsOutputs[k]) * (targets[k] - activationsOutputs[k]);
        }

        return error;
    }

So can I use the change = hidden_deltas[j] * activationsInputs[i] value as the gradient (partial derivative) for checking the sign?


Answer 1:


I think "over all patterns" simply means "in every learning iteration", i.e. the gradient accumulated over all training patterns in one pass through the training set (batch learning)... take a look at the RPROP paper.

As for the partial derivative: you've already implemented the normal backpropagation algorithm, which is a method for efficiently calculating the gradient. There you calculate the δ values for the individual neurons; multiplied by the corresponding input activations, these give the negative ∂E/∂w values, i.e. the partial derivatives of the global error as a function of the weights.
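
To connect this to the code in the question: a minimal sketch of accumulating that batch gradient for the input-to-hidden weights might look like this (gradInputs is a hypothetical accumulator, not part of the original code; it is zeroed before each pass over the training set and updated inside backPropagate for every pattern):

// Hypothetical batch-gradient accumulator, same shape as weightsInputs.
// Reset it to zeros at the start of each epoch (pass over the training set).
double[,] gradInputs = new double[inputsNumber, hiddenNumber];

// After computing hidden_deltas for the current pattern:
for (int i = 0; i < inputsNumber; i++)
{
    for (int j = 0; j < hiddenNumber; j++)
    {
        // hidden_deltas[j] * activationsInputs[i] is -dE/dw for this one
        // pattern; summed over all patterns it becomes the batch gradient
        // whose sign RPROP inspects.
        gradInputs[i, j] += hidden_deltas[j] * activationsInputs[i];
    }
}

The same accumulation works for the hidden-to-output weights with output_deltas[k] * activationsHidden[j].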

So instead of multiplying these values by a learning rate when updating the weights, you keep a separate step size per weight and scale it by one of two constants (η+ or η-), depending on whether the sign of the gradient has changed since the previous iteration.
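
For illustration, here is a minimal sketch of that update rule in C#, using the gradInputs accumulator from above (a simplified RPROP variant without weight backtracking; etaPlus, etaMinus, stepSizes, prevGrad and the step limits are assumed names with the usual constants from the RPROP paper, not anything taken from the original implementation):

// Hypothetical per-weight state kept between epochs: stepSizes initialised
// to e.g. 0.1, prevGrad initialised to 0.
const double etaPlus = 1.2, etaMinus = 0.5; // typical RPROP constants
const double stepMax = 50.0, stepMin = 1e-6;

for (int i = 0; i < inputsNumber; i++)
{
    for (int j = 0; j < hiddenNumber; j++)
    {
        double g = gradInputs[i, j]; // batch value of -dE/dw (see above)

        if (prevGrad[i, j] * g > 0)
        {
            // Same sign as in the previous epoch: grow the step size.
            stepSizes[i, j] = Math.Min(stepSizes[i, j] * etaPlus, stepMax);
        }
        else if (prevGrad[i, j] * g < 0)
        {
            // Sign flipped: we jumped over a minimum, so shrink the step
            // and skip this update (Math.Sign(0) is 0 below).
            stepSizes[i, j] = Math.Max(stepSizes[i, j] * etaMinus, stepMin);
            g = 0;
        }

        // g holds the negative gradient, so move the weight along its sign.
        weightsInputs[i, j] += Math.Sign(g) * stepSizes[i, j];
        prevGrad[i, j] = g;
    }
}

The classic variant from the paper additionally reverts the previous weight change when the sign flips (weight backtracking); see the paper for details.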




Answer 2:


The following links point to part of an implementation of the RPROP training technique in the Encog Artificial Intelligence Library. It should give you an idea of how to proceed. I would recommend downloading the entire library, because it will be easier to go through the source code in an IDE than through the online SVN interface.

http://code.google.com/p/encog-cs/source/browse/#svn/trunk/encog-core/encog-core-cs/Neural/Networks/Training/Propagation/Resilient

http://code.google.com/p/encog-cs/source/browse/#svn/trunk

Note that the code is in C#, but it shouldn't be difficult to translate into another language.



Source: https://stackoverflow.com/questions/2865057/resilient-backpropagation-neural-network-question-about-gradient
