MATLAB Answers

Semantic Segmentation - How many layers to replace in transfer learning?

49 views (last 30 days)
awezmm on 1 May 2019
Commented: Guy Reading on 27 Sep 2019
I'm doing semantic segmentation using ResNet-18 with DeepLab v3+.
However, I want to retrain on progressively harder tasks and want to use transfer learning. How many of the final layers should I be replacing? How do I figure out how many layers I have?
analyzeNetwork says I have 101 layers, but I am using ResNet-18, which I thought had far fewer?


Philip Li
Philip Li on 8 May 2019
Hi, I also cannot find this file anywhere. Were you able to get it?
Tohru Kikawada
Tohru Kikawada on 11 May 2019
Philip, you'll need to try this on R2019a, since the example has been revised to use DeepLab v3+ instead of SegNet in the latest version.
Guy Reading
Guy Reading on 27 Sep 2019
I've been doing a bit of digging on the resnet18() <-> DeepLab V3+ connection, in this link MATLAB writes:
"This example creates the Deeplab v3+ network with weights initialized from a pre-trained Resnet-18 network"
But then we might ask, "how do we use the weights trained on one network in another? Won't they be meaningless relative to DeepLab?"
Then this link writes:
"The latest implementation of DeepLab supports multiple network backbones, like MobileNetv2, Xception, ResNet-v1, PNASNET and Auto-DeepLab."
So I guess we can treat DeepLab v3+ as a form of extension of ResNet-18, and thus can use the weights.


Answers (2)

Tohru Kikawada
Tohru Kikawada on 2 May 2019
Did you see helperDeeplabv3PlusResnet18.m, which is attached to the example as a supporting file? That supporting function might be helpful for creating your own transfer learning network.


Tohru Kikawada
Tohru Kikawada on 6 May 2019
Type the following command; then helperDeeplabv3PlusResnet18.m can be found in your current folder.
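(The command itself didn't survive in this thread. As an assumption, it was likely an openExample call like the one below, which opens the example and copies its supporting files into the current folder — the example identifier here is a guess, not from the original post:)

```matlab
% Hypothetical: open the CamVid semantic segmentation example so that its
% supporting files (including helperDeeplabv3PlusResnet18.m) are copied
% into the current folder. The example identifier is an assumption.
openExample('deeplearning_shared/SemanticSegmentationUsingDeepLearningExample')
```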
awezmm on 7 May 2019
I have already looked at helperDeeplabv3PlusResnet18.m and said in my previous comment that it was not helpful...


Guy Reading
Guy Reading on 23 Sep 2019
Did you make any progress on this, @awezmm? I'm looking to do the same thing as you, with ResNet-18 too, and I got stuck at the same point, so I Googled and found your question here!
So far I've followed the (adapted for resnet) instructions of this tutorial:
So for resnet that'd be:
%% Load a pre-trained CNN
pretrainedFolder = fullfile(tempdir,'pretrainedNetwork');
pretrainedNetwork = fullfile(pretrainedFolder,'deeplabv3plusResnet18CamVid.mat');
data = load(pretrainedNetwork);
net = data.net;
layers = net.Layers
This shows me all 101 layers. For me, personally, I'd like to classify 2 classes (background or object), so I've edited the final layer to output 2 classes, but I'm pretty sure I need to change more layers and am unsure which ones:
%% Modify the network to use 2 categories
layers(101) = pixelClassificationLayer; % note: the tutorial uses classificationLayer, since it isn't semantic segmentation
& now I'm stuck! I'll comment to let you know any progress I've made... I'm going to look into the structure of ResNet more now, to get a better understanding of what I need to change and how...
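As a rough sketch of that next step (the layer names below are assumptions — inspect net.Layers to find the actual names in your release), the class-dependent head can be swapped out with replaceLayer on a layerGraph rather than by indexing into the Layers array:

```matlab
% Sketch: replace the class-dependent head for 2 classes.
% The layer names ('scorer' and 'classification') are assumptions --
% check net.Layers for the real names in your network.
numClasses = 2;
lgraph = layerGraph(net);

% New 1x1 convolution producing numClasses output channels
newConv = convolution2dLayer(1, numClasses, 'Name', 'scorer_new');

% New pixel classification layer listing the 2 classes
newPxLayer = pixelClassificationLayer('Name', 'labels', ...
    'Classes', ["background" "object"]);

lgraph = replaceLayer(lgraph, 'scorer', newConv);           % hypothetical name
lgraph = replaceLayer(lgraph, 'classification', newPxLayer); % hypothetical name
```

replaceLayer reconnects the new layers automatically, which avoids the problem of an edited Layers array losing its connection information.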


awezmm on 23 Sep 2019
Hey Guy,
I think it's best to replace just the last layer. Why are you replacing the others?
Guy Reading
Guy Reading on 24 Sep 2019
Oh right! So I looked to replace the layers that are specific to the number of classes we want to categorise into. Layers 97:101 have a dimension set to 11, which was the original number of classes — which is why I chose those. But I'm not 100% certain! Have you got this working at all? And what's the intuition behind picking only the last layer?
Guy Reading
Guy Reading on 26 Sep 2019
To all who are reading: the above method worked for me, and I'm starting to get labelled images from my model. I'm still not sure which layers to freeze; if there are any suggestions on that, I'd be interested to hear!
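For later readers: in R2019b and newer, a DeepLab v3+ network with the right number of classes can be built directly from a ResNet-18 backbone with deeplabv3plusLayers, which sidesteps editing the class-specific head by hand (the image size and class count below are placeholders for your own data, not values from this thread):

```matlab
% Build DeepLab v3+ with a ResNet-18 backbone for 2 classes (R2019b+).
% imageSize and numClasses are placeholders -- set them for your data.
imageSize = [720 960 3];
numClasses = 2;
lgraph = deeplabv3plusLayers(imageSize, numClasses, 'resnet18');
```

The backbone weights come from the pre-trained ResNet-18, so only the new decoder and classification layers start from scratch.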


