How to save the network training at a particular iteration
In BPN (backpropagation network) training I need to save the network at a particular iteration, load it when needed, and also resume training from the iteration where it stopped. How is this possible?
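One way to do this (a sketch using MATLAB's built-in save/load; the file name checkpoint.mat and the choice of every 1000 epochs are just examples, and w, prev_dw, epochs and mse are the training-state variables from the code below):

save('checkpoint.mat','w','prev_dw','epochs','mse'); % inside the loop, e.g. every 1000 epochs
load('checkpoint.mat','w','prev_dw','epochs','mse'); % before restarting, to resume that state

Note that bbackprop as written always re-randomizes w on entry, so resuming also requires skipping the random initialization when saved weights are loaded (see the sketches inside and after the function below).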
function Network = bbackprop(L,n,m,smse,X,D)
%%%%%VERIFICATION PHASE %%%%%
% determine number of input samples, desired output and their dimensions
[P,N] = size(X);
[Pd, M] = size (D);
% make sure that each input vector has a corresponding desired output
if P ~= Pd
error('backprop:invalidTrainingAndDesired', ...
'The number of input vectors and desired outputs do not match');
end
% make sure that at least 3 layers have been specified and that the
% dimensions of the specified input and output layers are
% equivalent to the dimensions of the input vectors and desired outputs
if length(L) < 3
error('backprop:invalidNetworkStructure','The network must have at least 3 layers');
else
if N ~= L(1)
e = sprintf('Dimension of input (%d) does not match input layer (%d)',N,L(1));
error('backprop:invalidLayerSize', e);
elseif M ~= L(end)
e = sprintf('Dimension of output (%d) does not match output layer (%d)',M,L(end));
error('backprop:invalidLayerSize', e);
end
end
%%%%%INITIALIZATION PHASE %%%%%
nLayers = length(L); % we'll use the number of layers often
% randomize the weight matrices (uniform random values in [-1 1]); there
% is a weight matrix between each layer of nodes. Each layer (excluding the
% output layer) has a bias node whose activation is always 1, that is, the
% node function is C(net) = 1. Furthermore, there is a link from each node
% in layer i to the bias node in layer j (the last row of each matrix)
% because it is less computationally expensive than the alternative. The
% weights of all links to bias nodes are irrelevant and are defined as 0
w = cell(nLayers-1,1); % a weight matrix between each layer
for i=1:nLayers-2
w{i} = [1 - 2.*rand(L(i+1),L(i)+1) ; zeros(1,L(i)+1)];
end
w{end} = 1 - 2.*rand(L(end),L(end-1)+1);
% initialize stopping conditions
mse = Inf;   % assuming the initial weight matrices are bad
epochs = 0;
a = cell(nLayers,1);       % one activation matrix for each layer
a{1} = [X ones(P,1)];      % a{1} is the input + '1' for the bias node activation;
                           % a{1} remains the same throughout the computation
for i=2:nLayers-1
a{i} = ones(P,L(i)+1);     % inner layers include a bias node (P-by-Nodes+1)
end
a{end} = ones(P,L(end));   % no bias node at output layer
net = cell(nLayers-1,1);   % one net matrix for each layer, exclusive of the input
for i=1:nLayers-2
net{i} = ones(P,L(i+1)+1); % affix bias node
end
net{end} = ones(P,L(end));
prev_dw = cell(nLayers-1,1);
sum_dw = cell(nLayers-1,1);
for i=1:nLayers-1
prev_dw{i} = zeros(size(w{i})); % prev_dw starts at 0
sum_dw{i} = zeros(size(w{i}));
end
% loop until computational bounds are exceeded or the network has converged
% to a satisfactory condition. We allow for 30000 epochs here; it may be
% necessary to increase or decrease this bound depending on the training set
while mse > smse %&& epochs < 30000   (epoch bound commented out by dinesh)
% FEEDFORWARD PHASE: calculate input/output of each layer for all samples
for i=1:nLayers-1
net{i} = a{i} * w{i}'; % compute inputs to current layer
% compute activation (output) of the current layer; for all layers
% except the output layer, the last node is the bias node and
% its activation is 1
if i < nLayers-1 % inner layers
a{i+1} = [2./(1+exp(-net{i}(:,1:end-1)))-1 ones(P,1)];
else % output layer
a{i+1} = 2 ./ (1 + exp(-net{i})) - 1;
end
end
% calculate sum squared error of all samples
err = (D-a{end}); % save this for later
sse = sum(sum(err.^2)); % sum of the error for all samples, and all nodes
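% BACKPROPAGATION PHASE (comment added for clarity): delta is the error
% times the derivative of the bipolar sigmoid f(x) = 2/(1+exp(-x)) - 1,
% whose derivative is (1+f)(1-f)/2; the missing factor of 1/2 is
% effectively absorbed into the learning rate n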
delta = err .* (1 + a{end}) .* (1 - a{end});
for i=nLayers-1:-1:1
sum_dw{i} = n * delta' * a{i};
if i > 1
delta = (1+a{i}) .* (1-a{i}) .* (delta*w{i});
end
end
% update prev_dw, the weight matrices, the epoch count and mse
for i=1:nLayers-1
% we have the sum of the delta weights, divide through by the
% number of samples and add momentum * delta weight at (t-1)
% finally, update the weight matrices
prev_dw{i} = (sum_dw{i} ./ P) + (m * prev_dw{i});
w{i} = w{i} + prev_dw{i};
end
epochs = epochs + 1;
mse = sse/(P*M); % mse = 1/P * 1/M * summed squared error
% progress display added by me: report MSE every 1000 epochs
if mod(epochs,1000)==0
fprintf('MSE: %g   epochs: %d\n', mse, epochs);
end
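% checkpointing sketch (added for the question above, not in the original
% code): save the training state so it can be restored later with load;
% 'checkpoint.mat' is an example file name
if mod(epochs,1000)==0
save('checkpoint.mat','w','prev_dw','epochs','mse');
end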
end
% return the trained network
Network.structure = L;
Network.weights = w;
Network.epochs = epochs;
Network.mse = mse;
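A minimal usage sketch (the XOR data, layer sizes, parameters and file names below are illustrative assumptions, not from the original post):

X = [-1 -1; -1 1; 1 -1; 1 1]; % bipolar XOR inputs, so N = 2 matches L(1)
D = [-1; 1; 1; -1];           % bipolar XOR targets, so M = 1 matches L(end)
Network = bbackprop([2 4 1], 0.1, 0.5, 0.01, X, D); % train
save('trained_net.mat','Network'); % save the network at this point
S = load('trained_net.mat');       % later, or in a new session
Network = S.Network;               % weight matrices are in Network.weights

To resume training from a checkpoint rather than from scratch, the function itself needs a small change, because it always re-randomizes w: for example (a hypothetical variant, not the original API), add an optional seventh argument w0 and wrap the random initialization in 'if nargin < 7, ...random init..., else w = w0; end'.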