Output of neural network is offset and scaled... help!?
6 views (last 30 days)
Søren Jensen on 28 Apr 2015
Answered: Søren Jensen on 29 Apr 2015
I am trying to simulate the outputs of a neural network myself, for later translation to Java so I can run it on a mobile device. For this I wrote the following simulation code for a network with two hidden layers and the tangent-sigmoid (tansig) transfer function at all layers:
function [ Results ] = sim_net( net, input )
    y1 = tansig(net.IW{1} * input + net.b{1});
    y2 = tansig(net.LW{2} * y1 + net.b{2});
    Results = tansig(net.LW{6} * y2 + net.b{3});
end
The sim_net function is then compared against MATLAB's own sim function using the following code:
clc
clear all
net = feedforwardnet([20 20]);
net.divideParam.trainRatio = 75/100; % Adjust as desired
net.divideParam.valRatio = 15/100; % Adjust as desired
net.divideParam.testRatio = 10/100; % Adjust as desired
net.inputs{1}.processFcns = {}; % no preprocessing
net.outputs{2}.processFcns = {};
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.layers{3}.transferFcn = 'tansig';
% Train and Apply Network
[x,t] = simplefit_dataset;
[net,tr] = train(net,x,t);
for i=1:length(x)
    disp(i) = sim_net(net,x(i));
    disp2(i) = sim(net,x(i));
end
plot(disp)
hold on
plot(disp2)
legend('our code','matlabs code')
The plot of the two outputs: [figure: 'our code' and 'matlabs code' curves plotted together]
However, a quick inspection using the following edit reveals that MATLAB's results are offset by 5 and also scaled by a factor of 5:
plot(disp)
hold on
plot((disp2-5)/5+0.1)
legend('our code','matlabs code')

[figure: after the shift and rescale, the two curves nearly coincide]
However, MATLAB's network shouldn't even be able to produce values above 1 when tansig is the final transfer function, should it?
1 Comment
Greg Heath on 28 Apr 2015
One hidden layer is sufficient, since a single-hidden-layer net is also a universal approximator.
The fewer weights that are used, the more robust the design.
Accepted Answer
Greg Heath on 28 Apr 2015
Edited: Greg Heath on 28 Apr 2015
The function inputs should be the weights, not the net itself.
Is LW{6} a typo? (For a two-hidden-layer net, the layer weights are LW{2,1} and LW{3,2}.)
One hidden layer is sufficient since it is a universal approximator.
Since the function is smooth with four local extrema, only four hidden nodes are necessary.
The fewer weights that are used, the more robust the design; it will also make your Java coding much easier (see the sketch below).
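A minimal sketch of that smaller design (same simplefit data as in the question; feedforwardnet's defaults give a tansig hidden layer and a purelin output):
[x,t] = simplefit_dataset;
net = feedforwardnet(4);   % one hidden layer, four tansig nodes; purelin output
[net,tr] = train(net,x,t);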
Please use the notation h to denote the output from a hidden layer.
Your output can have any scale because of the default normalization/denormalization (mapminmax) applied within train; a manual simulation has to reverse it, as in the sketch below.
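For example, in the posted script the input processing was cleared, but the output of layer 3 likely still carries mapminmax (outputs{2} addresses layer 2, while the output layer of a two-hidden-layer net is layer 3). A rough sketch of a manual simulation that reverses it, assuming mapminmax is the only remaining output process function (the function name is illustrative):
function y = sim_net_denorm(net, input)
    % Manually simulate the 2-hidden-layer tansig network, then reverse
    % the default mapminmax output normalization left in place by train.
    h1 = tansig(net.IW{1,1}*input + net.b{1});
    h2 = tansig(net.LW{2,1}*h1 + net.b{2});
    ys = tansig(net.LW{3,2}*h2 + net.b{3});
    y = mapminmax('reverse', ys, net.outputs{3}.processSettings{1});
end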
Code should also be faster if you copy the weights into plain variables, e.g. IW = net.IW{1,1}, b1 = net.b{1}, b2 = net.b{2}, LW = net.LW{2,1} (sketch below).
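Something along these lines, combining the dummy variables with the single-hidden-layer design and the h notation (a sketch; the variable names are illustrative):
% Extract weights once, outside any simulation loop
IW = net.IW{1,1};   % input-to-hidden weights
LW = net.LW{2,1};   % hidden-to-output weights
b1 = net.b{1};      % hidden biases
b2 = net.b{2};      % output bias
% Simulate one input x
h = tansig(IW*x + b1);   % hidden layer output
y = LW*h + b2;           % purelin output (feedforwardnet default)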
Thank you for formally accepting my answer
Greg
0 Comments