Error for Deep Learning U-Net
mohd akmal masud
on 25 Jun 2022
Dear all,
I was running my code but got an error. The data I am using is attached.
clc
clear all
close all
%testDataimages
DATASetDir = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128');
IMAGEDir = fullfile(DATASetDir,'ImagesTr');
volReader = @(x) matRead(x);
volds = imageDatastore(IMAGEDir, ...
'FileExtensions','.mat','ReadFcn',volReader);
% labelReader = @(x) matread(x);
matFileDir = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128\LabelsTr');
classNames = ["background", "tumor"];
pixelLabelID = [0 1];
% pxds = (LabelDirr,classNames,pixelLabelID, ...
% 'FileExtensions','.mat','ReadFcn',labelReader);
pxds = pixelLabelDatastore(matFileDir,classNames,pixelLabelID, ...
'FileExtensions','.mat','ReadFcn',@matRead);
ds = pixelLabelImageDatastore(volds,pxds);
volume = preview(volds);
label = preview(pxds);
patchSize = [128 128 64];
patchPerImage = 16;
miniBatchSize = 8;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;
dsTrain = transform(patchds,@augment3dPatch);
volLocVal = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128\imagesVal');
voldsVal = imageDatastore(volLocVal, ...
'FileExtensions','.mat','ReadFcn',volReader);
lblLocVal = fullfile('C:\Users\Akmal\Desktop\NEW 3D U NET 128X128\labelsVal');
pxdsVal = pixelLabelDatastore(lblLocVal,classNames,pixelLabelID, ...
'FileExtensions','.mat','ReadFcn',volReader);
dsVal = randomPatchExtractionDatastore(voldsVal,pxdsVal,patchSize, ...
'PatchesPerImage',patchPerImage);
dsVal.MiniBatchSize = miniBatchSize;
tbl = countEachLabel(pxds)
totalNumberOfPixels = sum(tbl.PixelCount);
frequency = tbl.PixelCount / totalNumberOfPixels;
inverseFrequency = 1./frequency
layerf = pixelClassificationLayer(...
'Classes',tbl.Name,'ClassWeights',inverseFrequency)
layerf=pixelClassificationLayer("Name","Segmentation-Layer")
lgraph = layerGraph();
tempLayers = [
image3dInputLayer([128 128 64 1],"Name","image3dinput")
convolution3dLayer([3 3 3],64,"Name","Encoder-Stage-1-Conv-1","Padding","same")
reluLayer("Name","Encoder-Stage-1-ReLU-1_1")
convolution3dLayer([3 3 3],64,"Name","Encoder-Stage-1-Conv-2","Padding","same")
reluLayer("Name","Encoder-Stage-1-ReLU-1_2")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
maxPooling3dLayer([5 5 5],"Name","Encoder-Stage-1-MaxPool_1","Padding","same")
convolution3dLayer([3 3 3],128,"Name","Encoder-Stage-1-MaxPool_2","Padding","same")
reluLayer("Name","Encoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],128,"Name","Encoder-Stage-2-Conv-2","Padding","same")
reluLayer("Name","Encoder-Stage-2-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
maxPooling3dLayer([5 5 5],"Name","Encoder-Stage-2-MaxPool","Padding","same")
convolution3dLayer([3 3 3],256,"Name","Encoder-Stage-3-Conv-1","Padding","same")
reluLayer("Name","Encoder-Stage-3-ReLU-1")
convolution3dLayer([3 3 3],256,"Name","conv3d","Padding","same")
reluLayer("Name","Encoder-Stage-3-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
dropoutLayer(0.5,"Name","Encoder-Stage-3-DropOut")
maxPooling3dLayer([5 5 5],"Name","Encoder-Stage-3-MaxPool","Padding","same")
convolution3dLayer([3 3 3],512,"Name","Bridge-Conv-1","Padding","same")
reluLayer("Name","Bridge-ReLU-1")
convolution3dLayer([3 3 3],512,"Name","Bridge-Conv-2","Padding","same")
reluLayer("Name","Bridge-ReLU-2")
dropoutLayer(0.5,"Name","Bridge-DropOut")
transposedConv3dLayer([5 5 5],256,"Name","Decoder-Stage-1-UpConv","Cropping","same")
reluLayer("Name","Decoder-Stage-1-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
depthConcatenationLayer(2,"Name","Decoder-Stage-1-DepthConcatenation")
convolution3dLayer([3 3 3],256,"Name","Decoder-Stage-1-Conv-1","Padding","same")
reluLayer("Name","Decoder-Stage-1-ReLU-1")
convolution3dLayer([3 3 3],256,"Name","Decoder-Stage-1-Conv-2","Padding","same")
reluLayer("Name","Decoder-Stage-1-ReLU-2")
transposedConv3dLayer([5 5 5],128,"Name","Decoder-Stage-2-UpConv","Cropping","same")
reluLayer("Name","Decoder-Stage-2-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
depthConcatenationLayer(2,"Name","Decoder-Stage-2-DepthConcatenation")
convolution3dLayer([3 3 3],128,"Name","Decoder-Stage-2-Conv-1","Padding","same")
reluLayer("Name","Decoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],128,"Name","Decoder-Stage-2-Conv-2","Padding","same")
reluLayer("Name","Decoder-Stage-2-ReLU-2")
transposedConv3dLayer([5 5 5],64,"Name","Decoder-Stage-3-UpConv","Cropping","same")
reluLayer("Name","Decoder-Stage-3-UpReLU")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
depthConcatenationLayer(2,"Name","Decoder-Stage-3-DepthConcatenation")
convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-3-Conv-1","Padding","same")
reluLayer("Name","Decoder-Stage-3-ReLU-1")
convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-3-Conv-2","Padding","same")
reluLayer("Name","Decoder-Stage-3-ReLU-2")
convolution3dLayer([1 1 1],3,"Name","Final-ConvolutionLayer","Padding","same")
softmaxLayer("Name","softmax")
pixelClassificationLayer("Name","pixel-class")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-1_2","Encoder-Stage-1-MaxPool_1");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-1_2","Decoder-Stage-3-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Encoder-Stage-2-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Decoder-Stage-2-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-3-ReLU-2","Encoder-Stage-3-DropOut");
lgraph = connectLayers(lgraph,"Encoder-Stage-3-ReLU-2","Decoder-Stage-1-DepthConcatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-1-UpReLU","Decoder-Stage-1-DepthConcatenation/in2");
lgraph = connectLayers(lgraph,"Decoder-Stage-2-UpReLU","Decoder-Stage-2-DepthConcatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-3-UpReLU","Decoder-Stage-3-DepthConcatenation/in1");
figure,plot(lgraph);
% inputSize = [64 64 64];
% numClasses = 2;
% encoderDepth = 3;
% lgraph = unet3dLayers(inputSize,numClasses,'EncoderDepth',encoderDepth)
maxEpochs = 10;
options = trainingOptions('adam', ...
'MaxEpochs',maxEpochs, ...
'InitialLearnRate',1e-3, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropPeriod',5, ...
'LearnRateDropFactor',0.97, ...
'ValidationData',dsVal, ...
'ValidationFrequency',50, ...
'Plots','training-progress', ...
'Verbose',false, ...
'MiniBatchSize',miniBatchSize,...
'ExecutionEnvironment','cpu');
doTraining = true;
if doTraining
modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
[net,info] = trainNetwork(dsTrain,lgraph,options);
save(['trained3DUNet-' modelDateTime '-Epoch-' num2str(maxEpochs) '.mat'],'net');
else
load('trained3DVNet-07-Jun-2022-13-45-30-Epoch-250.mat');
end
ERROR
Error using trainNetwork
Invalid training data. The output size (3) of the last layer does not match the number of
classes of the responses (2).
I don't understand what this error means.
0 Comments
Accepted Answer
Chandrika
on 26 Jun 2022
As per my understanding, the output of your network and your labels must match. Since the label IDs have been set to [0 1], your labels are single-channel volumes containing exactly two classes ("background" and "tumor"), so the network's final output must have one channel per class. Also check that the spatial size of your image and label volumes is compatible with the patch size [128 128 64]; for example, if the volumes were only 128-by-128, the patch size would also have to be [128 128].
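A quick sanity check (just a sketch, reusing the volume and label variables that your preview calls already create):
% Confirm that a training volume and its label have the same spatial size
% and that the labels contain exactly the two expected classes.
size(volume)        % spatial size of one image volume, e.g. 128 128 64
size(label)         % should match the volume's spatial size
categories(label)   % should list "background" and "tumor" only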
If that alone does not fix the issue, change the number of filters in the last convolutional layer ("Final-ConvolutionLayer") from 3 to 2, so that the output size matches the number of classes reported in the error message.
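A minimal sketch of that change, assuming you keep the layer name "Final-ConvolutionLayer" from your script:
% Rebuild the final 1-by-1-by-1 convolution with one filter per class
% and swap it into the existing layer graph.
numClasses = numel(classNames);   % classNames = ["background","tumor"], so 2
newFinalConv = convolution3dLayer([1 1 1],numClasses, ...
    "Name","Final-ConvolutionLayer","Padding","same");
lgraph = replaceLayer(lgraph,"Final-ConvolutionLayer",newFinalConv);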
Hope this helps!
0 Comments
More Answers (0)