Error using trainNetwork. Number of observations in X and Y disagree.
Gowri Prasood
on 15 Jul 2023
Commented: Katja Mogalle
on 5 Feb 2025 at 21:23
Good Evening,
I'm trying to train a neural network, but I get the error "Error using trainNetwork. Number of observations in X and Y disagree." I can't work out where the problem is.
XTrain is a 607x39x367x1 double and loadYTrain is a 607x1 double (before training I also converted it to categorical).
Can you please help me resolve this? It's urgent.
Thank you all in advance.
Here is my code:
layers = [
    imageInputLayer([39 367 1])
    convolution2dLayer(3, 32, 'Padding', 'same')
    reluLayer
    convolution2dLayer(3, 64, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    dropoutLayer(0.2)
    fullyConnectedLayer(128)
    reluLayer
    dropoutLayer(0.2)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(15)
    softmaxLayer
    classificationLayer];
% Set training options
options = trainingOptions('adam', ...
    'MaxEpochs', 20, ...
    'MiniBatchSize', 16, ...
    'InitialLearnRate', 0.0001, ...
    'ValidationData', {test_data, test_labels_categorical}, ...
    'Plots', 'training-progress');
train_labels_categorical = categorical(train_labels);
% Train the model
trained_model = trainNetwork(train_data, train_labels_categorical, layers, options);
0 Comments
Accepted Answer
Katja Mogalle
on 17 Jul 2023
Hello,
The Deep Learning Toolbox in MATLAB expects in-memory image data to be presented as a h-by-w-by-c-by-N numeric array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images.
If I understand your data correctly, your array is currently of size N-by-h-by-w-by-c. You can permute the training and test data to bring them in the correct format as follows:
train_data = permute(train_data,[2,3,4,1]);
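As a quick sanity check, the permutation and the resulting sizes can be sketched as follows (the data here is random and only stands in for the 607x39x367x1 array from the question):

```matlab
% Simulated training data in the N-by-h-by-w-by-c layout from the question:
% 607 observations of 39x367x1 "images"
train_data = rand(607, 39, 367, 1);
train_labels_categorical = categorical(randi(15, 607, 1));

% Move the observation dimension to the end: h-by-w-by-c-by-N
train_data = permute(train_data, [2, 3, 4, 1]);   % now 39x367x1x607

% The fourth dimension must match the number of labels
assert(size(train_data, 4) == numel(train_labels_categorical));
```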
I hope this helps.
Katja
6 Comments
Katja Mogalle
on 18 Jul 2023
You mentioned initially that "XTrain is a 607x39x367x1". Now you say that "x_train=607x14313". So I assume the data was somehow flattened to save it to file, but I don't know exactly how.
You'd need to reshape the data back into a 4-D array which the convolutional neural network can interpret as height-by-width-by-channels-by-numObservations. But to do this, you need to figure out how the data was flattened. I suspect you'd need one of the following commands to unflatten the data:
x_train = reshape(x_train,[607,39,367,1])
or
x_train = reshape(x_train,[607,367,39,1])
Then, don't forget to permute the observation dimension into the fourth dimension as I showed earlier.
As you are using a 2D convolutional neural network, you have to decide which dimensions of your data represent the two "spatial" dimensions and which one represents the "channel/feature" dimension. I suspect your data only has one dimension that could be interpreted as "space" (or time). So another option you could look at is 1D CNNs, or even recurrent networks. Here is an example of speech emotion recognition based on GTCC coefficients that uses a recurrent neural network: https://www.mathworks.com/help/audio/ug/speech-emotion-recognition.html
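Putting the two steps together, one possible unflatten-then-permute pipeline looks like this (a sketch only: MATLAB's reshape is column-major, so which of the two reshape orders above is correct depends on how the data was saved and must be verified against the original files):

```matlab
% x_train arrives flattened as 607-by-14313 (607 observations, 39*367 = 14313)
x_train = rand(607, 14313);   % stand-in for the loaded data

% Guess 1: unflatten back to observations-by-39-by-367-by-1 ...
x_train = reshape(x_train, [607, 39, 367, 1]);

% ... then move observations into the fourth dimension: 39x367x1x607
x_train = permute(x_train, [2, 3, 4, 1]);
```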
More Answers (1)
ayan dutta
on 5 Feb 2025 at 20:15
Hello,
I am having a similar problem. My network layers and training code are as follows. As mentioned in this post, I have also used permutation. However, I am still getting the same error -- "Number of observations in X and Y disagree." Any idea why I am getting this error?
whos X_reshaped
  Name          Size             Bytes        Class     Attributes
  X_reshaped    256x256x3x800    629145600    single

whos Y_train
  Name          Size     Bytes    Class          Attributes
  Y_train       800x1    1718     categorical
layers = [
    image3dInputLayer([width height 3], 'Name', 'input', 'Normalization', 'none') % Input layer for 3D images
    convolution3dLayer(3, 32, 'Padding', 'same', 'Stride', 1, 'Name', 'conv1') % First 3D convolutional layer
    batchNormalizationLayer('Name', 'bn1')
    reluLayer('Name', 'relu1')
    convolution3dLayer(3, 64, 'Padding', 'same', 'Stride', 1, 'Name', 'conv2') % Second 3D convolutional layer
    batchNormalizationLayer('Name', 'bn2')
    reluLayer('Name', 'relu2')
    fullyConnectedLayer(128, 'Name', 'fc1') % Fully connected layer
    reluLayer('Name', 'relu_fc1')
    fullyConnectedLayer(numel(categories(Y_train)), 'Name', 'fc2') % Output layer with the number of classes
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'output') % Classification output layer
    ];
X_reshaped = permute(X_train, [2, 3, 4, 1]);
assert(size(X_reshaped, 4) == size(Y_train, 1), 'Number of samples in X and Y must match');
net = trainNetwork(X_reshaped,Y_train, layers, options);
1 Comment
Katja Mogalle
on 5 Feb 2025 at 21:23
Hi.
Your data seems to have two "spatial" dimensions (i.e. height and width) and one channel dimension (with size 3, probably for RGB images, I assume).
However, you specified your network architecture to expect THREE "spatial" dimensions. Hence the software interprets the channel dimension of your data as the third spatial dimension, the channel dimension suddenly has size 800, and the software thinks you only have one observation in the data.
In essence, you should use imageInputLayer and convolution2dLayer for your data and then you should not see any size mismatch errors anymore.
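For illustration, a minimal 2-D version of the input and convolutional layers could look like this (layer names are kept from the original post; the Y_train labels below are random stand-ins, and Deep Learning Toolbox is assumed):

```matlab
% Stand-in for the 800x1 categorical labels from the question
Y_train = categorical(randi(2, 800, 1));

% 2-D equivalents of the 3-D layers; X_reshaped is already 256x256x3x800,
% so its 800 observations (dim 4) line up with the 800x1 Y_train.
layers = [
    imageInputLayer([256 256 3], 'Name', 'input', 'Normalization', 'none')
    convolution2dLayer(3, 32, 'Padding', 'same', 'Stride', 1, 'Name', 'conv1')
    batchNormalizationLayer('Name', 'bn1')
    reluLayer('Name', 'relu1')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Stride', 1, 'Name', 'conv2')
    batchNormalizationLayer('Name', 'bn2')
    reluLayer('Name', 'relu2')
    fullyConnectedLayer(128, 'Name', 'fc1')
    reluLayer('Name', 'relu_fc1')
    fullyConnectedLayer(numel(categories(Y_train)), 'Name', 'fc2')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'output')];
```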
Hope that helps.