Error in pool1_2: input size mismatch. The size of the input to this layer is different from the expected input size. Inputs to this layer: from layer relu1_2 (1x1x64 output)
5 views (last 30 days)
Aiman Zara
on 3 May 2023
Commented: Philip Brown on 5 May 2023
I have used Deep Network Designer and I am stuck with this error. Please help me modify the code.
layers = [
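% First stack: AlexNet-style layers (data through fc8_1)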
imageInputLayer([227 227 3],"Name","data")
convolution2dLayer([11 11],96,"Name","conv1","BiasLearnRateFactor",2,"Stride",[4 4])
reluLayer("Name","relu1")
crossChannelNormalizationLayer(5,"Name","norm1","K",1)
maxPooling2dLayer([3 3],"Name","pool1_1","Stride",[2 2])
groupedConvolution2dLayer([5 5],128,2,"Name","conv2","BiasLearnRateFactor",2,"Padding",[2 2 2 2])
reluLayer("Name","relu2")
crossChannelNormalizationLayer(5,"Name","norm2","K",1)
maxPooling2dLayer([3 3],"Name","pool2_1","Stride",[2 2])
convolution2dLayer([3 3],384,"Name","conv3","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
reluLayer("Name","relu3")
groupedConvolution2dLayer([3 3],192,2,"Name","conv4","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
reluLayer("Name","relu4")
groupedConvolution2dLayer([3 3],128,2,"Name","conv5","BiasLearnRateFactor",2,"Padding",[1 1 1 1])
reluLayer("Name","relu5")
maxPooling2dLayer([3 3],"Name","pool5_1","Stride",[2 2])
fullyConnectedLayer(4096,"Name","fc6_1","BiasLearnRateFactor",2)
reluLayer("Name","relu6_1")
dropoutLayer(0.5,"Name","drop6_1")
fullyConnectedLayer(4096,"Name","fc7_1","BiasLearnRateFactor",2)
reluLayer("Name","relu7_1")
dropoutLayer(0.5,"Name","drop7")
fullyConnectedLayer(1000,"Name","fc8_1","BiasLearnRateFactor",2)
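% Second stack: VGG-16-style layers appended after fc8_1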
convolution2dLayer([3 3],64,"Name","conv1_1","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu1_1")
convolution2dLayer([3 3],64,"Name","conv1_2","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu1_2")
maxPooling2dLayer([2 2],"Name","pool1_2","Stride",[2 2])
convolution2dLayer([3 3],128,"Name","conv2_1","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu2_1")
convolution2dLayer([3 3],128,"Name","conv2_2","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu2_2")
maxPooling2dLayer([2 2],"Name","pool2_2","Stride",[2 2])
convolution2dLayer([3 3],256,"Name","conv3_1","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu3_1")
convolution2dLayer([3 3],256,"Name","conv3_2","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu3_2")
convolution2dLayer([3 3],256,"Name","conv3_3","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu3_3")
maxPooling2dLayer([2 2],"Name","pool3","Stride",[2 2])
convolution2dLayer([3 3],512,"Name","conv4_1","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu4_1")
convolution2dLayer([3 3],512,"Name","conv4_2","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu4_2")
convolution2dLayer([3 3],512,"Name","conv4_3","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu4_3")
maxPooling2dLayer([2 2],"Name","pool4","Stride",[2 2])
convolution2dLayer([3 3],512,"Name","conv5_1","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu5_1")
convolution2dLayer([3 3],512,"Name","conv5_2","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu5_2")
convolution2dLayer([3 3],512,"Name","conv5_3","Padding",[1 1 1 1],"WeightL2Factor",0)
reluLayer("Name","relu5_3")
maxPooling2dLayer([2 2],"Name","pool5_2","Stride",[2 2])
fullyConnectedLayer(4096,"Name","fc6_2","WeightL2Factor",0)
reluLayer("Name","relu6_2")
dropoutLayer(0.5,"Name","drop6_2")
fullyConnectedLayer(4096,"Name","fc7_2","WeightL2Factor",0)
reluLayer("Name","relu7_2")
dropoutLayer(0.5,"Name","dropt7")
fullyConnectedLayer(9,"Name","fc8_2","WeightL2Factor",0)
softmaxLayer("Name","prob")
classificationLayer("Name","output")];
0 Comments
Accepted Answer
Philip Brown
on 4 May 2023
If I call analyzeNetwork(layers), I see that on layer 28, pool1_2, you are trying to do a 2x2 max pooling operation, but the activation size of the previous layer is 1x1, so that's not possible.
I think the network architecture is not correct: it is reducing the spatial size of your inputs too much. By layer 17, "fc6_1", you already have activations of spatial size 1x1, and you cannot do any more downsampling operations after this. I believe you need to edit the layer parameters so that you are not reducing the spatial size of the activations so rapidly. Alternatively, reduce the depth of your network or increase the input size (note that this will also increase memory usage and computation time).
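To see where the 1x1 activations come from, you can trace the spatial size through the first stack with the standard output-size formula. This is just a quick sanity-check sketch; the sizes should match what the Network Analyzer table reports:
% Spatial output size of a conv/pool layer:
% floor((in + 2*pad - filter)/stride) + 1
outSize = @(in,f,s,p) floor((in + 2*p - f)/s) + 1;
sz = 227;                      % imageInputLayer
sz = outSize(sz,11,4,0);       % conv1   -> 55
sz = outSize(sz,3,2,0);        % pool1_1 -> 27
sz = outSize(sz,5,1,2);        % conv2   -> 27
sz = outSize(sz,3,2,0);        % pool2_1 -> 13
sz = outSize(sz,3,1,1);        % conv3   -> 13 (conv4, conv5 likewise)
sz = outSize(sz,3,2,0)         % pool5_1 -> 6
% fc6_1 then collapses the 6x6x256 activations to 1x1x4096, so the
% 2x2 pooling in pool1_2 further down has nothing left to pool.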
2 Comments
Philip Brown
on 5 May 2023
Is this a network you've adapted from elsewhere?
If you look at the Network Analyzer table, it'll tell you the size of the activations propagating through the network. I think the architecture of the later part isn't going to be that helpful if it's working with lots of 1x1 spatial sizes. You may want to remove the later layers, or update the earlier convolution and pooling layers (filter/pooling sizes, padding, etc.) so that the activations remain larger than 1x1 into the deeper parts of the network.
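As one concrete option, here is a minimal sketch of the "remove the later layers" route, assuming the intended output is the 9 classes of fc8_2 and that the appended VGG-style stack can simply be dropped. The indexing follows the layer numbering above (drop7 is layer 22), and the name "fc8" is my own placeholder:
% Keep the first stack up to drop7 (layers 1-22), then attach a
% 9-class head in place of the 1000-way fc8_1 and everything after it.
layersFixed = [
    layers(1:22)
    fullyConnectedLayer(9,"Name","fc8")    % 9 classes, as in fc8_2
    softmaxLayer("Name","prob")
    classificationLayer("Name","output")];
analyzeNetwork(layersFixed)   % should now report no errors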
More Answers (0)