Training CNN with custom miniBatchDatastore throws cryptic error
I am trying to train a CNN model for a classification task. The input to the model is a `1x1024x2` frame and I have the following network:
layers =
28×1 Layer array with layers:
1 'Input Layer' Image Input 1×1024×2 images
2 'CNN1' 2-D Convolution 16 1×8 convolutions with stride [1 1] and padding 'same'
3 'BN1' Batch Normalization Batch normalization
4 'ReLU1' ReLU ReLU
5 'MaxPool1' 2-D Max Pooling 1×2 max pooling with stride [1 2] and padding [0 0 0 0]
6 'CNN2' 2-D Convolution 24 1×8 convolutions with stride [1 1] and padding 'same'
7 'BN2' Batch Normalization Batch normalization
8 'ReLU2' ReLU ReLU
9 'MaxPool2' 2-D Max Pooling 1×2 max pooling with stride [1 2] and padding [0 0 0 0]
10 'CNN3' 2-D Convolution 32 1×8 convolutions with stride [1 1] and padding 'same'
11 'BN3' Batch Normalization Batch normalization
12 'ReLU3' ReLU ReLU
13 'MaxPool3' 2-D Max Pooling 1×2 max pooling with stride [1 2] and padding [0 0 0 0]
14 'CNN4' 2-D Convolution 48 1×8 convolutions with stride [1 1] and padding 'same'
15 'BN4' Batch Normalization Batch normalization
16 'ReLU4' ReLU ReLU
17 'MaxPool4' 2-D Max Pooling 1×2 max pooling with stride [1 2] and padding [0 0 0 0]
18 'CNN5' 2-D Convolution 64 1×8 convolutions with stride [1 1] and padding 'same'
19 'BN5' Batch Normalization Batch normalization
20 'ReLU5' ReLU ReLU
21 'MaxPool5' 2-D Max Pooling 1×2 max pooling with stride [1 2] and padding [0 0 0 0]
22 'CNN6' 2-D Convolution 96 1×8 convolutions with stride [1 1] and padding 'same'
23 'BN6' Batch Normalization Batch normalization
24 'ReLU6' ReLU ReLU
25 'AP1' 2-D Average Pooling 1×32 average pooling with stride [1 1] and padding [0 0 0 0]
26 'FC1' Fully Connected 24 fully connected layer
27 'SoftMax' Softmax softmax
28 'Output' Classification Output crossentropyex
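For reference, here is a minimal sketch of how a layer array matching this printout might be built. The original construction code is not included in the question, so the variable names and exact name/value options below are assumptions inferred from the printed layers:
numFilters = [16 24 32 48 64 96];                      % filter counts read from the printout
layers = imageInputLayer([1 1024 2], 'Name', 'Input Layer');
for k = 1:6
    layers = [layers
        convolution2dLayer([1 8], numFilters(k), 'Padding', 'same', 'Name', sprintf('CNN%d', k))
        batchNormalizationLayer('Name', sprintf('BN%d', k))
        reluLayer('Name', sprintf('ReLU%d', k))];
    if k < 6                                            % the sixth block has no max pooling
        layers = [layers; maxPooling2dLayer([1 2], 'Stride', [1 2], 'Name', sprintf('MaxPool%d', k))];
    end
end
layers = [layers
    averagePooling2dLayer([1 32], 'Name', 'AP1')
    fullyConnectedLayer(24, 'Name', 'FC1')
    softmaxLayer('Name', 'SoftMax')
    classificationLayer('Name', 'Output')];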
I'm using a custom minibatch datastore to train the model. The read function returns the following table structure:
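(The table itself is not shown here. For a custom datastore that implements matlab.io.datastore.MiniBatchable, read is documented to return a table with one row per observation, the predictors in the first variable and the categorical responses in the last. A minimal sketch of that expected shape, with assumed variable names and dummy values:)
predictors = repmat({rand(1, 1024, 2, 'single')}, 4, 1);   % one 1x1024x2 frame per row, stored in a cell
responses  = categorical(randi(24, 4, 1));                 % one of 24 class labels per row
data = table(predictors, responses)                        % the kind of table read(mbds) should return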
I'm using these training options:
% Specify training options
options = trainingOptions('sgdm', ...
'MiniBatchSize', miniBatchSize, ...
'MaxEpochs', 10, ...
'Verbose', true, ...
'Plots', 'training-progress');
% Train the network
net = trainNetwork(mbds, layers, options);
and I get the following error:
Error using trainNetwork
Operands to the logical AND (&&) and OR (||) operators must be convertible to logical scalar values. Use the ANY or ALL functions to reduce operands to logical scalar values.
Caused by:
Operands to the logical AND (&&) and OR (||) operators must be convertible to logical scalar values. Use the ANY or ALL functions to reduce operands to logical scalar values.
I've been stuck on this for a while now, and there's no way to inspect the trainNetwork code to properly debug. I'd appreciate your suggestions. Thank you.
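Although trainNetwork itself is not easy to step through, MATLAB's error breakpoints can often stop execution at the point where the failing comparison is evaluated. A short sketch using standard debugger commands (nothing here is specific to this question's data):
dbstop if caught error      % break where the error is first raised, even if it is caught and rethrown
net = trainNetwork(mbds, layers, options);
% When execution stops, check the operands of the failing && or || expression
% with whos / size to see which one is empty or non-scalar, then:
dbclear all                 % remove the error breakpoint again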
Paul Osinowo
Graduate Student
University of Strathclyde
Answers (1)
Karan Singh
on 25 Jul 2024
I have faced this error before. Since I don't have access to your full code, here are two things to try, whichever works for you:
- The error is raised when an operand of the short-circuit || or && operators is empty or a non-scalar array. Go through your script, including the custom minibatch datastore code, and check whether one of these operators is being fed an array; for non-scalar operations use the element-wise & and | operators, or reduce the operands with any/all. A short illustration follows below.
- It can also happen if the MATLAB path or the toolbox cache is corrupted, which you can restore with:
>> restoredefaultpath
>> rehash toolboxcache
It worked for me and may be useful for you.
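For the first suggestion, a tiny illustration of the failure mode with hypothetical values, and the two standard ways around it:
>> a = [true false];   % non-scalar logical array
>> b = true;
>> a && b              % errors: operands must be convertible to logical scalar values
>> all(a) && b         % either reduce the array to a scalar with ANY/ALL ...
>> a & b               % ... or use the element-wise operator when array results are intended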