How can I improve the performance of a feed-forward backpropagation neural network?
Hi, I am working in MATLAB R2013a to build a neural network prediction model. I have tried different training algorithms, activation functions, and numbers of hidden neurons, but I still can't get R above 0.8 for the training, validation, and testing sets. For some networks the training-set R exceeds 0.8, but the validation and testing sets then give low R values (around 0.4~0.5). The code is below. Are there any solutions to improve the performance and the R value?
% inputs: 48x206, targets: 5x206
hiddenLayerSize = 15;
net = fitnet(hiddenLayerSize);
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'purelin';
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net.divideFcn = 'dividerand';
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net.trainFcn = 'traincgp';
net.performFcn = 'mse';
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression','plotfit'};
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
view(net)
1 Comment
pepper yuan
on 30 Mar 2016
Hi Jocelyn, have you solved the problem of improving the neural network's performance? I'm dealing with the same problem as you. Can you give me your email so I can ask you some questions? I'd appreciate it if you could reply.
Accepted Answer
More Answers (3)
Jocelyn
on 28 Mar 2016
0 votes
1 Comment
Greg Heath
on 28 Mar 2016
1. You are still wasting time, space, and attention by keeping statements that merely assign default values.
2. How many inputs and outputs are you using after the variable reduction?
3. What happens when you use
net.divideFcn = 'dividetrain'
and try to minimize H using a double for loop as in my posts?
Hope this helps.
Greg
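A minimal sketch of the double-loop search Greg describes: an outer loop over candidate hidden-layer sizes H and an inner loop over random weight initializations, with all data used for training via 'dividetrain'. The search limits Hmax = 20 and Ntrials = 10 are assumptions, not values from the thread:

```matlab
% Sketch: search over hidden-layer size H and random initializations.
% Assumes inputs (48x206) and targets (5x206) are in the workspace.
rng(0)                                   % initialize RNG so the search is reproducible
Hmax = 20;  Ntrials = 10;                % assumed search limits
MSE00 = mean(var(targets', 1));          % mean target variance (NMSE reference)
bestRsq = -Inf;
for H = 1:Hmax
    for trial = 1:Ntrials
        net = fitnet(H, 'traincgp');
        net.divideFcn = 'dividetrain';   % all data used for training
        [net, tr] = train(net, inputs, targets);
        NMSE = perform(net, targets, net(inputs)) / MSE00;
        Rsq  = 1 - NMSE;                 % coefficient of determination
        if Rsq > bestRsq
            bestRsq = Rsq;  bestH = H;  bestnet = net;
        end
    end
end
```

Each pass through the inner loop creates a fresh network, so the weights are re-randomized from the seeded RNG state; the smallest H that reaches the target Rsq is the design to keep.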
Jocelyn
on 12 Apr 2016
0 votes
7 Comments
Tien Tran
on 12 Apr 2016
Hi Jocelyn, can you give me your data (inputs and targets)? I will try to help if I can.
Jocelyn
on 13 Apr 2016
Tien Tran
on 13 Apr 2016
Hi Jocelyn
I have tried your data, but I find that it is unrealistic, or lacks enough data points to train an ANN effectively.

Jocelyn
on 13 Apr 2016
Tien Tran
on 13 Apr 2016
If you just need a high coefficient of determination (R or R^2) to present, without regard for quality, you can choose all the data as training data, with validation data = 70% of the total data and testing data = 70% of the total data. I do not recommend this idea.
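For clarity, the overlapping split Tien describes could be set up with 'divideind' as sketched below; because the validation and test "sets" reuse samples the network was trained on, the reported R values are optimistically biased, which is exactly why he does not recommend it:

```matlab
% Sketch of the (not recommended) overlapping split: every sample trains
% the net, and the validation/test sets reuse 70% of those same samples.
N = size(inputs, 2);                         % 206 samples in this thread
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:N;              % all data = training data
net.divideParam.valInd   = 1:round(0.7*N);   % 70% of total, overlapping
net.divideParam.testInd  = 1:round(0.7*N);   % 70% of total, overlapping
```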
Greg Heath
on 16 Apr 2016
If you post the data in *.m or *.txt, I may be able to take a look at it.
Again: My first impression is that you don't have enough data to accurately deal with 48 inputs.
Greg
Jocelyn
on 16 Apr 2016
Jocelyn
on 19 Apr 2016
0 votes
6 Comments
Greg Heath
on 20 Apr 2016
Edited: Greg Heath
on 20 Apr 2016
I consider it not only feasible, but NECESSARY to initialize the RNG before training the 1st of multiple designs.
How else could you duplicate your work?
Have you searched for any of my design examples?
ynew = net(xnew)
yields new answers
Hope this helps.
Greg
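A short sketch of the reproducibility point: seed the generator before training, and the same seed reproduces the same design later. The seed value 0 is arbitrary, and xnew stands for any new 48-row input matrix, as in Greg's comment:

```matlab
% Seed the RNG before training so the design can be duplicated exactly.
rng(0)                           % arbitrary seed; reuse it to reproduce the net
[net, tr] = train(net, inputs, targets);
ynew = net(xnew)                 % apply the trained net to new 48xNnew inputs
```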
Jocelyn
on 20 Apr 2016
Greg Heath
on 20 Apr 2016
1. It is more appropriate to use the coefficient of determination (see Wikipedia), Rsq = 1 - NMSE, as the performance measure.
2. Rsq is the fraction of the mean target variance that is modeled by the net.
3. NMSE is the normalized mean-square error. The normalization denominator is the mean target variance.
4. Your goal of R > 0.8 is equivalent to Rsq > 0.64; that is, you are only requiring the net to model 64% of the target variance.
5. Typically, I advise multivariable regression designers to
   a. Practice on MATLAB example data obtained from
      help nndatasets
      doc nndatasets
   b. Standardize data to zero-mean/unit-variance. Then
      i. All variables are on an equal footing
      ii. NMSE = MSE
   c. Use MINMAX to find outliers that should be modified or deleted.
   d. Plot inputs, targets, and targets vs. inputs to get a feel for the data.
   e. Determine how well the data is linearly related via the correlation coefficient matrix and/or a linear model.
      i. Strong correlations among inputs indicate that the input dimension should probably be reduced via input elimination and/or combination.
      ii. Strong correlations among targets indicate that the output dimension could be reduced. However, sometimes highly correlated outputs make designs easier.
6. In the case of 48 inputs for 5 targets, it is very likely that the input dimension can be substantially reduced via input elimination and/or combination. In the latter case, the contribution of individual inputs can be sorted out later.
Greg Heath
on 20 Apr 2016
Sometimes in high dimensional cases it is very fruitful to see how much of the target variance can be modeled by linear and quadratic classifiers.
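One quick way to get such a linear baseline is ordinary least squares on standardized data; this is an assumed sketch (Greg does not give code here), with zx and zt standing for standardized inputs and targets:

```matlab
% Linear least-squares baseline: how much target variance does a purely
% linear model explain? Assumes standardized zx (48x206) and zt (5x206).
Xb = [zx; ones(1, size(zx, 2))];          % inputs with a bias row (49x206)
W  = zt / Xb;                             % 5x49 weights via least squares
zthat = W * Xb;                           % linear predictions
NMSElin = mean((zt(:) - zthat(:)).^2) / mean(var(zt', 1));
Rsqlin  = 1 - NMSElin                     % variance fraction modeled linearly
```

If Rsqlin is already close to the Rsq the neural net achieves, the net is adding little beyond a linear fit, which supports reducing the input dimension first.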
Jocelyn
on 21 Apr 2016
Greg Heath
on 26 Apr 2016
For no overfitting,
   Ntrneq >= Nw
where Ntrneq = Ntrn*O is the number of training equations and Nw = (I+1)*H + (H+1)*O is the number of unknown weights. This leads to the upper bound
   Hub = (Ntrneq - O)/(I + O + 1)
       = (Ntrn*O - O)/(I + O + 1)
       ~ (0.7*N*O - O)/(I + O + 1)
Hope this helps.
Greg
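Plugging in this thread's dimensions (I = 48 inputs, O = 5 outputs, N = 206 samples, 70% training split) gives an upper bound of roughly H <= 13; a sketch of the arithmetic:

```matlab
I = 48;  O = 5;  N = 206;           % dimensions from the question
Ntrn   = round(0.70*N);             % 144 training samples (70% split)
Ntrneq = Ntrn*O;                    % 720 training equations
% For H hidden nodes, Nw = (I+1)*H + (H+1)*O; requiring Ntrneq >= Nw yields
Hub = (Ntrneq - O)/(I + O + 1)      % ~13.2, so H <= 13 avoids Nw > Ntrneq
```

Note that the hiddenLayerSize = 15 in the original script already exceeds this bound, which is consistent with the good training R but poor validation/test R reported in the question.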