Modifying a Pretrained Neural Network
5 views (last 30 days)
Shai Kendler on 1 Apr 2020
Answered: Srivardhan Gadila on 8 Apr 2020
I plan to use a pretrained net such as alexnet with an input image of 227*227*5. I exported the net to the Network Designer app and changed the input and first convolution layers according to my requirements. I analyzed the architecture and it seems perfect. Can I trust the new network to be a good starting point, or am I being naive?
Thanks,
Shai
0 Comments
Accepted Answer
Srivardhan Gadila on 8 Apr 2020
Since the convolution2dLayer and imageInputLayer have been replaced, the output of the imageInputLayer will now be different: the mean originally used for zero-center normalization no longer applies, and the features output by the replaced convolution layer will also differ and may not be useful. If you are training the network on a new dataset with image input size 227*227*5, none of the above matters. If instead you are using the network for feature extraction and your data is very different from the original data, then the features extracted deeper in the network might be less useful for your task.
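For reference, here is a minimal sketch of the same modification done programmatically instead of in the app. It assumes Deep Learning Toolbox and the alexnet support package are installed; the filter size, filter count, and stride mirror AlexNet's original first convolution layer, and the layer names 'data' and 'conv1' are illustrative:

net = alexnet;
layers = net.Layers;

% Replace the input layer with one that accepts 5-channel images.
% Zero-center statistics are recomputed from the new training data
% when trainNetwork is called.
layers(1) = imageInputLayer([227 227 5], 'Name', 'data', ...
    'Normalization', 'zerocenter');

% Replace the first convolution layer, keeping the original geometry
% (11x11 filters, 96 filters, stride 4) but with 5 input channels.
% Its weights are initialized randomly and must be learned from scratch.
layers(2) = convolution2dLayer(11, 96, 'Stride', 4, 'Name', 'conv1');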
Here are a few suggestions for retraining:
- Try freezing the weights of the original layers by setting WeightLearnRateFactor and BiasLearnRateFactor to zero for each convolution2dLayer, and set the same two factors to zero for each fullyConnectedLayer as well (see the sketch after this list).
- Or retrain the complete network without freezing the weights of any layers.
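A minimal sketch of the first suggestion, assuming layers is the modified layer array from the sketch above and that 'conv1' is the name of the replaced layer, which must stay trainable:

for i = 1:numel(layers)
    % Freeze only the original learnable layers; the replaced
    % first convolution layer ('conv1') keeps learning.
    isConv = isa(layers(i), 'nnet.cnn.layer.Convolution2DLayer');
    isFC   = isa(layers(i), 'nnet.cnn.layer.FullyConnectedLayer');
    if (isConv || isFC) && ~strcmp(layers(i).Name, 'conv1')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor = 0;
    end
end

Passing the modified layers array to trainNetwork along with the new 5-channel training data then recomputes the input normalization statistics and trains only the unfrozen layers.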
0 Comments
More Answers (0)