Need help getting the weight, bias, and layer-input matrices at each iteration

I am new to neural networks and am using a feedforward NN for an XOR gate. The network has one input layer (with 2 inputs), a first layer (hidden layer with 5 neurons), and a second layer (output layer with 1 neuron).
I need all the matrices associated with the network at each iteration. The final weight and bias matrices can be found using net.IW{1,1}, net.LW{2,1}, net.b{1}, and net.b{2}, but I need these matrices at each iteration or epoch.
How can I get the weight and bias matrices (at each epoch) between (i) the input layer and the first layer, and (ii) the first layer and the output layer? I also need the first layer's output matrix.
My purpose is to convert all the weight matrices into Canonical Signed Digit (CSD) matrices and multiply them by the corresponding input matrices (which are in decimal) to see the improvement in performance.
Is there any way to get these matrices (at each epoch), and is it possible to multiply the CSD weight matrices by the corresponding decimal input matrices in each epoch?
Please help me. Thank you.
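[As background on the CSD conversion mentioned above, here is a minimal sketch of converting an integer to canonical signed digit form (digits in {-1, 0, 1} with no two adjacent nonzeros). int2csd is an illustrative name, not a toolbox function; fractional weights would first have to be scaled to integers, e.g. n = round(w*2^k), after which digit d(i) carries weight 2^(i-1-k).

function d = int2csd(n)
% Canonical signed digit representation of integer n,
% returned least-significant digit first.
d = [];
while n ~= 0
    if mod(n, 2) == 1
        di = 2 - mod(n, 4);  % +1 when n mod 4 == 1, -1 when n mod 4 == 3
        n = n - di;          % clearing the low part prevents adjacent nonzeros
    else
        di = 0;
    end
    d(end+1) = di; %#ok<AGROW>
    n = n / 2;
end
end

For example, int2csd(7) returns [-1 0 0 1], i.e. 7 = 8 - 1, so a multiplication by 7 becomes one shift and one subtraction.]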

Accepted Answer

Greg Heath on 3 Dec 2011
You only need 2 hidden neurons.
For best performance use bipolar [-1 1] inputs and TANSIG hidden nodes.
Use TANSIG for bipolar outputs and LOGSIG for unipolar [0 1] outputs.
Nepochs = 1
net.trainParam.epochs = Nepochs;
net.trainParam.show = inf; % Do your own plotting
Nloops = 100
for i = 1:Nloops
    [net tr Y E] = train(net,p,t);
    % Extract weights and multiply to obtain the hidden node output.
    % Since everything else that you want is in the LHS, that calculation
    % may not be necessary. Store and/or plot whatever info you want
    % from the LHS.
    % To see what is available, set Nloops = 1 and remove the semicolon.
end
WARNING:
You will not get the same performance as with Nepochs = 100 and Nloops = 1, because TRAIN reinitializes its internal training parameters at each call.
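[A minimal, self-contained sketch of that loop for the XOR problem, using the same old-style newff/train calls as elsewhere in this thread; the history variables (W1hist, etc.) are illustrative names for per-epoch snapshots.

p = [-1 -1 1 1; -1 1 -1 1];    % bipolar XOR inputs
t = [-1 1 1 -1];               % bipolar XOR targets
net = newff(p, t, 2);          % 2 hidden neurons, as suggested above
net.trainParam.epochs = 1;     % one epoch per call to train
net.trainParam.show = inf;
Nloops = 100;
W1hist = cell(1, Nloops); W2hist = cell(1, Nloops);
b1hist = cell(1, Nloops); b2hist = cell(1, Nloops);
for i = 1:Nloops
    [net tr Y E] = train(net, p, t);
    W1hist{i} = net.IW{1,1};   % input-to-hidden weights after this epoch
    W2hist{i} = net.LW{2,1};   % hidden-to-output weights
    b1hist{i} = net.b{1};
    b2hist{i} = net.b{2};
    % first layer's output for the whole input batch:
    a1 = tansig(net.IW{1,1}*p + repmat(net.b{1}, 1, size(p,2)));
end]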
Hope this helps.
Greg
1 Comment
Ayesa on 6 Dec 2011
Thanks for your answer. But is there any way to get the weight matrices for each epoch? The performance is not good, and I assume that if the number of epochs is increased, the performance will also improve.
Also, what does Nloops mean here? Is Nloops equal to the number of elements of the training set, or something else?


More Answers (1)

Greg Heath on 6 Dec 2011
Please reread the comments after the line
[net tr Y E] = ...
Since you have the updated net, you can obtain the updated weights and other info.
Have you done as suggested, i.e., removed the semicolon to see what info is available in net and tr?
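[For example, a small sketch of pulling one such piece of info out of tr on every pass and doing your own plotting; this assumes the net, p, t setup from the loop in the accepted answer, and perfHist is an illustrative name.

perfHist = zeros(1, Nloops);
for i = 1:Nloops
    [net tr Y E] = train(net, p, t);
    perfHist(i) = tr.perf(end);   % mse after this one-epoch call
end
plot(1:Nloops, perfHist)
xlabel('loop pass (one epoch each)')
ylabel('mse')]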
Hope this helps.
Greg
1 Comment
Ayesa on 6 Dec 2011
Yes, I have done as you suggested. This is the code:
xorInputs = [-1 -1 1 1 -1 -1 1 1; -1 1 -1 1 -1 1 -1 1];
xorTargets = [-1 1 1 -1 -1 1 1 -1];
trainInd = 4;
valInd = 2;
testInd = 2;
trainInputs = xorInputs(:, 1:trainInd);
trainTargets = xorTargets(:, 1:trainInd);
valInputs = xorInputs(:, trainInd+1:trainInd+valInd);
valTargets = xorTargets(:, trainInd+1:trainInd+valInd);
testInputs = xorInputs(:, trainInd+valInd+1:trainInd+valInd+testInd);
testTargets = xorTargets(:, trainInd+valInd+1:trainInd+valInd+testInd);
net = newff(xorInputs, xorTargets, 2);
net.numLayers = 2;
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:trainInd;
net.divideParam.valInd = trainInd+1:trainInd+valInd;
net.divideParam.testInd = trainInd+valInd+1:trainInd+valInd+testInd;
net.trainFcn = 'trainlm';
net.trainParam.show = inf;
Nepochs = 1;
net.trainParam.epochs = Nepochs;
net.trainParam.goal = 0;
net.trainParam.max_fail = 4;
net.trainParam.mem_reduc = 1;
net.trainParam.min_grad = 1e-10;
net.trainParam.mu = 0.001;
net.trainParam.mu_dec = 0.1;
net.trainParam.mu_inc = 10;
net.trainParam.mu_max = 1e10;
net.trainParam.time = inf;
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
Nloops = 1; % max(Nloops) = trainInd + valInd + testInd
for i = 1:Nloops
    [net tr Y E] = train(net, xorInputs, xorTargets)
    tr
    net.IW{1,1}
    net.LW{2,1}
    net.b{1}
    net.b{2}
    x = xorInputs(:, i)
    first_layer_output = tansig(net.IW{1,1}*x + net.b{1})
end
outputs = sim(net, testInputs)
tr =
        trainFcn: 'trainlm'
      trainParam: [1x1 struct]
      performFcn: 'mse'
    performParam: [1x1 struct]
       divideFcn: 'divideind'
     divideParam: [1x1 struct]
        trainInd: [1 2 3 4]
          valInd: [5 6]
         testInd: [7 8]
            stop: 'Maximum epoch reached.'
      num_epochs: 1
      best_epoch: 1
            goal: 0
          states: {1x8 cell}
           epoch: [0 1]
            time: [0.2870 0.3080]
            perf: [1.1619 0.9912]
           vperf: [1.4727 8.1911e-005]
           tperf: [0.8510 1.9823]
              mu: [1.0000e-003 1.0000e-004]
        gradient: [1.2077 0.0295]
        val_fail: [0 0]
If Nloops is greater than the number of input columns, it throws an error, because size(xorInputs) = [2 8] and the loop indexes xorInputs(:, i).
And most of the time the simulation result does not give the correct output (maybe because epochs = 1).

