Why are the activations of frozen layers different before and after training?
ntinoson on 29 Jun 2018
Commented: Amanjit Dulai on 28 Aug 2018
I am following the example "transfer-learning-using-googlenet", where the last 3 layers ('loss3-classifier', 'prob', 'output') are replaced with 3 new ones. Then I 'freeze' the first 141 layers (that is, up to and including 'pool5-drop_7x7_s1'):
layers(1:141) = freezeWeights(layers(1:141));               % set learn rate factors to 0
lgraph = createLgraphUsingConnections(layers, connections); % rebuild the layer graph
Then I fine-tune the network as in the example.
Since 'pool5-7x7_s1' comes BEFORE 'pool5-drop_7x7_s1', I would expect the following two vectors to be the same:
b_orig = activations(net_orig, I, 'pool5-7x7_s1');
b_tune = activations(net_tune, I, 'pool5-7x7_s1');
but they aren't! Any idea why?
P.S. I also tried the activations of several other layers BEFORE 'pool5-drop_7x7_s1' and got different vectors. Here 'I' is an image, net_orig = googlenet;, and net_tune is the network resulting from fine-tuning.
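One check that narrows this down (a sketch, assuming the standard GoogLeNet layer ordering, where net.Layers(2) is the first convolution 'conv1-7x7_s2') is to confirm that the frozen weights themselves did not change during training:
w_orig = net_orig.Layers(2).Weights;  % 'conv1-7x7_s2', one of the frozen layers
w_tune = net_tune.Layers(2).Weights;
isequal(w_orig, w_tune)               % true: the frozen weights are identical
If this returns true, the weights are untouched and the difference must enter on the input side, before any learnable layer.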
Accepted Answer
Amanjit Dulai on 14 Aug 2018
The vectors are different because when you fine-tune on a new dataset, the average image stored in the imageInputLayer is recalculated for your new dataset. The input layer's 'zerocenter' normalization subtracts this average image from every input, so the two networks feed differently normalized data into the same frozen weights.
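A minimal way to verify this (a sketch, assuming net_orig and net_tune from the question; in releases around R2018 the stored average image is exposed as the input layer's AverageImage property, renamed Mean in R2019a and later):
avg_orig = net_orig.Layers(1).AverageImage;  % average image shipped with googlenet
avg_tune = net_tune.Layers(1).AverageImage;  % recomputed by trainNetwork on the new data
max(abs(avg_orig(:) - avg_tune(:)))          % nonzero: inputs are zero-centered differently
Since the frozen weights are identical, the differently normalized input propagates through every subsequent layer, which is why the activations differ even well before 'pool5-drop_7x7_s1'.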