How to Show the Weight or Bias in a Neural Network?
34 views (last 30 days)
How can I show the weights and biases from every layer of my neural network? I am building a feedforward neural network with 2 hidden layers. Also, how do I determine how many hidden layers I should use? Currently I have 3 inputs and 1 output. When I tried to increase the number of hidden layers to 3, I got an error saying that I do not have sufficient inputs for 3 hidden layers.
1 Comment
Shani
on 19 Nov 2013
I am trying to create a neural network. Would you have any notes I could use to help me with that, please?
Accepted Answer
Greg Heath
on 19 Apr 2013
Edited: Greg Heath
on 19 Apr 2013
1. If the input/output transformation function is reasonably well behaved, 1 hidden layer is sufficient. The resulting net is a universal approximator.
2. However, if you need a ridiculously high number of hidden nodes, H (especially if the number of unknown weights Nw = (I+1)*H + (H+1)*O approaches or exceeds the number of training equations Ntrneq = Ntrn*O), you can reduce the total number of nodes by introducing a second hidden layer.
[ I, Ntrn ] = size(trninput)     % I inputs, Ntrn training cases
[ O, Ntrn ] = size(trntarget)    % O outputs
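For concreteness, a quick worked example with illustrative numbers (not from the original question) comparing Nw and Ntrneq:
I = 3; O = 1; Ntrn = 100; H = 10;   % illustrative values only
Nw     = (I+1)*H + (H+1)*O          % 51 unknown weights
Ntrneq = Ntrn*O                     % 100 training equations
% Keeping Nw comfortably below Ntrneq leaves the problem overdetermined.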
3. Nevertheless, it is usually better to stick with 1 hidden layer and use a validation stopping subset (the default) and/or a regularized objective function (an option of mse: help mse) or a regularization training function (help trainbr), as sketched below.
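A minimal sketch of those two options, assuming inputs x and targets t already exist; the 0.1 regularization value is just a placeholder:
net = feedforwardnet(10);                % one hidden layer, 10 nodes
% Validation stopping is the default (dividerand); the ratios can be tuned:
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net.performParam.regularization = 0.1;   % regularized mse (help mse)
[net, tr] = train(net, x, t);
% Or use Bayesian regularization instead (help trainbr):
netbr = train(feedforwardnet(10, 'trainbr'), x, t);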
4. Sometimes a ridiculously high number of weights is the result of using a ridiculously high number of inputs. So, it may be worthwhile to consider input subset selection before resorting to a second hidden layer.
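Input subset selection can be done many ways; here is a rough leave-one-input-out sketch (my own illustration, not Greg's procedure), again assuming x and t exist:
[I, ~] = size(x);
errLeaveOut = zeros(I, 1);
for k = 1:I
    xk  = x(setdiff(1:I, k), :);         % drop input k
    net = feedforwardnet(10);            % arbitrary small net
    net.trainParam.showWindow = false;
    [net, tr] = train(net, xk, t);
    yk  = net(xk);
    errLeaveOut(k) = mse(net, t(:, tr.testInd), yk(:, tr.testInd));
end
% Inputs whose removal barely hurts the test error are candidates to drop.
[~, ranking] = sort(errLeaveOut)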
For a single hidden layer
weights = getwb(net)
        = [ Iw(:); b1(:); Lw(:); b2(:) ]
where
Iw = net.IW{1,1}    % input-to-hidden weight matrix
b1 = net.b{1}       % hidden layer bias vector
Lw = net.LW{2,1}    % hidden-to-output layer weight matrix
b2 = net.b{2}       % output layer bias vector
You can run a small example if you want to see how getwb orders the weights with 2 hidden layers.
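For example, an experiment along those lines with made-up data, using separatewb to undo the packing and check which block is which:
x = rand(3, 50);  t = rand(1, 50);   % made-up data: 3 inputs, 1 output
net = feedforwardnet([5 3]);         % two hidden layers with 5 and 3 nodes
net = configure(net, x, t);          % set weight/bias sizes without training
wb  = getwb(net);                    % everything packed into one column vector
[b, IW, LW] = separatewb(net, wb);   % unpack to compare against the net object
b{1}, IW{1,1}                        % hidden layer 1: biases and input weights
b{2}, LW{2,1}                        % hidden layer 2: biases and layer weights
b{3}, LW{3,2}                        % output layer: biases and layer weights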
Hope this helps.
Thank you for formally accepting my answer
Greg
2 Comments
Sai Kumar Dwivedi
on 18 Mar 2015
Regarding the explanation you gave in your 2nd point:
(I+1)*H + (H+1)*O < Ntrn*O
Is this some kind of heuristic, or does it have a mathematical background?
More Answers (2)
Manu R
on 6 Mar 2011
Edited: John Kelly
on 19 Nov 2013
Neural net objects in MATLAB have fields you can access to determine layer weights and biases.
Suppose:
mynet = feedforwardnet % Just a toy example, without any training
weights = mynet.LW
biases = mynet.b
% weight and bias values:
%
% IW: {2x1 cell} containing 1 input weight matrix
% LW: {2x2 cell} containing 1 layer weight matrix
% b: {2x1 cell} containing 2 bias vectors
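Note that an untrained, unconfigured net starts with empty weight matrices, so the commands above only display the cell array structure. Configuring (or training) with data, here using placeholder x and t, fills in the dimensions so the actual values can be shown:
x = rand(3, 20);  t = rand(1, 20);   % placeholder data: 3 inputs, 1 output
mynet = configure(mynet, x, t);      % initializes weight/bias dimensions
mynet.IW{1,1}                        % input-to-hidden weight matrix
mynet.LW{2,1}                        % hidden-to-output weight matrix
mynet.b{1}                           % hidden layer biases
mynet.b{2}                           % output layer bias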