Question
I created a neural network in MATLAB. This is the script:
    load dati.mat;
    inputs=dati(:,1:8)';
    targets=dati(:,9)';
    hiddenLayerSize = 10;
    net = patternnet(hiddenLayerSize);
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
    net.outputs{2}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
    net = struct(net);   % convert to a struct so the processing parameters below can be edited
    net.inputs{1}.processParams{2}.ymin = 0;
    net.inputs{1}.processParams{4}.maxfrac = 0.02;
    net.outputs{2}.processParams{4}.maxfrac = 0.02;
    net.outputs{2}.processParams{2}.ymin = 0;
    net = network(net);  % convert the struct back into a network object
    net.divideFcn = 'divideind';  
    net.divideMode = 'sample';  % Divide up every sample
    net.divideParam.trainInd = 1:428;
    net.divideParam.valInd = 429:520;
    net.divideParam.testInd = 521:612;
    net.trainFcn = 'trainscg';  % Scaled conjugate gradient backpropagation
    net.performFcn = 'mse';  % Mean squared error
    net.plotFcns = {'plotperform','plottrainstate','ploterrhist', 'plotregression', 'plotconfusion', 'plotroc'};
    net=init(net);
    net.trainParam.max_fail=20;
    [net,tr] = train(net,inputs,targets);
    outputs = net(inputs);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)
Now I want to save the weights and biases of the network and write out its equation. I saved the weights and biases:
    W1=net.IW{1,1};   % input-to-hidden-layer weights
    W2=net.LW{2,1};   % hidden-to-output-layer weights
    b1=net.b{1,1};    % hidden layer biases
    b2=net.b{2,1};    % output layer biases
So I applied the data preprocessing and wrote the following equation:
    max_range=0;
    [y,ps]=removeconstantrows(input, max_range);   % drop constant rows
    ymin=0;
    ymax=1;
    [y,ps2]=mapminmax(y,ymin,ymax);                % rescale each row to [0,1]
    ymean=0;
    ystd=1;
    y=mapstd(y,ymean,ystd);                        % normalize rows to zero mean, unit std
    maxfrac=0.02;
    y=processpca(y,maxfrac);                       % PCA; drop components contributing <2% of variance
    in=y';
    uscita=tansig(W2*(tansig(W1*in+b1))+b2);       % two-layer network equation
But with the same input, input=[1:8], I get results that differ from the network's output. Why? What's wrong? Please help, it's important!
I am using MATLAB R2010b.
Answer 1:
It looks like you are pre-processing the inputs but not post-processing the outputs. Post-processing uses the "reverse" processing form: targets are pre-processed during training, so outputs have to be reverse-processed.
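For example, you can re-use the processing settings that train stored in the network object instead of recomputing them on the new data. A minimal sketch, assuming the variables from the question's script are still in the workspace (x_new is just an illustrative name for one sample):

    x_new = inputs(:,1);                 % one sample as a column vector
    % Apply the input pre-processing, in the same order as processFcns
    y = x_new;
    for k = 1:numel(net.inputs{1}.processFcns)
        fcn = net.inputs{1}.processFcns{k};
        y = feval(fcn,'apply',y,net.inputs{1}.processSettings{k});
    end
    % Network equation (keeping the tansig/tansig form from the question)
    a = tansig(W2*tansig(W1*y + b1) + b2);
    % Reverse the output post-processing, in reverse order
    for k = numel(net.outputs{2}.processFcns):-1:1
        fcn = net.outputs{2}.processFcns{k};
        a = feval(fcn,'reverse',a,net.outputs{2}.processSettings{k});
    end

The value in a should then match net(x_new).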
Answer 2:
This equation
uscita=tansig(W2*(tansig(W1*in+b1))+b2); 
is wrong. Why do you write two tansig? You have 10 neurons, so you should write it 10 times or use for i=1:10.
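If it helps, here is a minimal sketch of the per-neuron loop this answer describes (h is an illustrative name; W1, b1, W2, b2 and the pre-processed column vector in come from the question). Note that tansig also accepts vectors, so the loop computes the same hidden activations as tansig(W1*in+b1):

    nHidden = size(W1,1);          % 10 hidden neurons
    h = zeros(nHidden,1);
    for i = 1:nHidden
        h(i) = tansig(W1(i,:)*in + b1(i));   % output of hidden neuron i
    end
    uscita = tansig(W2*h + b2);              % output layer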
Source: https://stackoverflow.com/questions/8837600/equation-that-compute-a-neural-network-in-matlab