In an assignment A(I) = B, the number of elements in B and I must be the same.

Hi, does anybody know how I can fix this error on the line "e(i) = d - y2"?
for i = 1:size(data_train,2)
    x = data_train(:,i);
    d = d_train(i); %pattern mode is selected, not batch mode
    % forward propagation
    v1 = wh*x; %data entered
    y1 = tanh(v1);
    y1 = [-1; y1];
    v2 = wo*y1;
    y2 = v2; %output; the activation function should be linear
    % backward propagation:
    % adjust output synaptic weights
    e(i) = d - y2; %error calculation
    delta_o = e(i) * 1; %because the derivative of y = x is 1
    wo = wo + etha*delta_o*y1'; %set new weights in the output layer
    % adjust hidden synaptic weights
    delta_h = (1 - y1.^2) .* (delta_o*wo)';
    wh = wh + etha*delta_h(2:end)*x';
end
And in the workspace, y2 is 3x1 and d is 1x1.
  2 Comments
Fangjun Jiang on 4 Aug 2022
For things like this, put a breakpoint on that line and run the code line by line. Check the value of every variable on that line. This basic debugging skill will help you resolve many errors.
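For example, a minimal sketch of that workflow (the variable names match the question's code):

```matlab
% Make MATLAB pause the moment the assignment error is thrown:
dbstop if error

% ... run the training script; when it stops at e(i) = d - y2,
% inspect the operands at the K>> debug prompt:
%   size(d)    % expected 1x1
%   size(y2)   % here 3x1, so d - y2 is 3x1 and cannot fit in e(i)

% Remove the breakpoint when done:
dbclear if error
```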


Answers (2)

James Tursa on 4 Aug 2022 (edited 4 Aug 2022)
" And in the work space y2 is 3x1 and d is 1x1 "
Then d - y2 will be 3x1. You can't assign a 3-element vector to e(i), which is a single scalar location.
If you could show an image of the math you are trying to implement, we can help you fix up the code.
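To illustrate the mismatch, here is a minimal sketch using the sizes reported in the workspace (the values of d and y2 are hypothetical stand-ins):

```matlab
d  = 1;                 % 1x1, as in the workspace
y2 = [0.2; 0.5; 0.9];   % 3x1, as in the workspace

% e(i) = d - y2;        % errors: 3 elements cannot fill one scalar slot

% If all three components should be kept each iteration, one option is
% to store the result as a column of a matrix instead:
e = zeros(3, 1);        % preallocate: 3 rows, one column per sample
i = 1;
e(:,i) = d - y2;        % a 3x1 result fits into column i
```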
  3 Comments
Fatemeh Salar on 4 Aug 2022
Sure, here is what I was doing (it's basically regression data, and I want to predict the values in column 5 using a multi-layer perceptron neural network):
load Power
group = Power(:,[5]); %last column is the output
Totaldata = Power(:,[1:4]); %input
%% Normalization
mu = mean(Totaldata,2); %mean along dimension 2 (across columns); the default is along dimension 1
sd = std(Totaldata')';
Totaldata = (Totaldata - repmat(mu,1,size(Totaldata,2))) ./ repmat(sd,1,size(Totaldata,2));
%% Adding Bias
Totaldata= [-ones(1,size(Totaldata,2));Totaldata];
%% Train data VS Test Data
div= 0.7;
num= round(div* size(Totaldata,2));
ind = 1:(size(Totaldata,2));
data_train = Totaldata(:,ind(1:num));
d_train = group(ind(1:num));
data_test = Totaldata(:,ind(num+1:end));
d_test = group(ind(num+1:end));
%% Implementing Neural Network
p= size(data_train,1);
nhid = 20; %number of hidden-layer neurons
nout = size(d_train,1); %number of output-layer neurons
wh = 0.1*randn(nhid,p); %hidden-layer weights
wo = 0.1*randn(nout,nhid+1); %output-layer weights
etha = 0.0001; %learning rate
Nepoch = 100;
for n = 1:Nepoch
    for i = 1:size(data_train,2)
        x = data_train(:,i);
        d = d_train(i); %pattern mode is selected, not batch mode
        % forward propagation
        v1 = wh*x; %data entered
        y1 = tanh(v1);
        y1 = [-1; y1];
        v2 = wo*y1;
        y2 = v2; %output; the activation function should be linear
        % backward propagation:
        % adjust output synaptic weights
        e(i) = d - y2; %error calculation
        delta_o = e(i) * 1; %because the derivative of y = x is 1
        wo = wo + etha*delta_o*y1'; %set new weights in the output layer
        % adjust hidden synaptic weights
        delta_h = (1 - y1.^2) .* (delta_o*wo)';
        wh = wh + etha*delta_h(2:end)*x';
    end
    MSE(n) = mse(e);
    % disp(['MSE(',num2str(n),'): ',num2str(MSE(n))])
end
James Tursa on 4 Aug 2022
@Fatemeh Salar There are ways to store vectors or matrices across iterations, as others have pointed out, but we can't be sure that is what is needed. We would like to see a description (e.g., an image) of the math you are trying to implement. Do you have anything like that you can post?



Walter Roberson on 4 Aug 2022
x= data_train(:,i);
That looks like a column of data, but we cannot tell how large it is.
v1 = wh*x; %data entered
If wh is a scalar, then since x is a vector, v1 would be a vector. But we don't know: wh could be a 2D array and x might be a scalar. v1 is probably a vector, but that is not provable from this code.
y1 = tanh(v1);
Same size as v1.
y1 = [-1;y1];
That would fail if y1 were not a scalar or a column vector, so given that this line succeeds, we can conclude that v1 is a scalar or a vector. Either way, y1 must be a column vector after this line.
v2 = wo*y1;
We do not know the size of wo, but unless it is a scalar or its column count matches the length of y1, the * operation would fail. So v2 is probably a column vector.
y2 = (v2); %output and it should be linear active function
Same size as v2, and probably the same size as y1.
d = d_train(i); %Pattern mode is selected not Batch mode
Scalar.
e(i) = d - y2; %Error calculation
The right-hand side is a scalar minus a (likely) column vector. The left-hand side names a single scalar location, so the right-hand side does not fit.
If you want to record all of the d - y2 values through the loop, then either assign into columns or use a cell array.
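Both options can be sketched like this (a minimal example with stand-in values; the column version assumes d - y2 stays 3x1 on every iteration):

```matlab
nSamples = 5;                    % hypothetical number of training samples
E  = zeros(3, nSamples);         % Option 1: one 3x1 column per iteration
Ec = cell(1, nSamples);          % Option 2: cell array; sizes may vary

for i = 1:nSamples
    d  = 1;                      % stand-ins for the real per-sample values
    y2 = rand(3,1);
    E(:,i) = d - y2;             % columns: requires a fixed-size result
    Ec{i}  = d - y2;             % cells: works even if the size changes
end
```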
  2 Comments
Fatemeh Salar on 4 Aug 2022
Here is what I wrote before that code. NB: the Power data contains 5 columns, each with 10000 values.
load Power
group = Power(:,[5]); %last column is the output
Totaldata = Power(:,[1:4]); %input
%% Normalization
mu = mean(Totaldata,2); %mean along dimension 2 (across columns); the default is along dimension 1
sd = std(Totaldata')';
Totaldata = (Totaldata - repmat(mu,1,size(Totaldata,2))) ./ repmat(sd,1,size(Totaldata,2));
%% Adding Bias
Totaldata= [-ones(1,size(Totaldata,2));Totaldata];
%% Train data VS Test Data
div= 0.7;
num= round(div* size(Totaldata,2));
ind = 1:(size(Totaldata,2));
data_train = Totaldata(:,ind(1:num));
d_train = group(ind(1:num));
data_test = Totaldata(:,ind(num+1:end));
d_test = group(ind(num+1:end));
%% Implementing Neural Network
p= size(data_train,1);
nhid = 20; %number of hidden-layer neurons
nout = size(d_train,1); %number of output-layer neurons
wh = 0.1*randn(nhid,p); %hidden-layer weights
wo = 0.1*randn(nout,nhid+1); %output-layer weights
etha = 0.0001; %learning rate
Walter Roberson on 4 Aug 2022
Regardless of whether y2 comes out as a column vector or as a 2D array, you cannot store it in the single scalar location e(i).
