Why are the results of forward and predict very different in deep learning?
When I use a "dlnetwork" deep neural network model to make predictions, the results of the two functions are very different. As I understand it, predict freezes the batchNormalizationLayer and dropout layers, while forward does not freeze those parameters; forward is the pass used during the training phase.


From the two pictures above, the first 10 outputs differ by orders of magnitude between the two calls. Where does the problem come from?
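For reference, a minimal sketch (an assumed toy network, not my actual model, on a recent Deep Learning Toolbox release) that reproduces this kind of discrepancy: forward uses the statistics of the current mini-batch for batch normalization, while predict uses the running statistics stored in net.State, which are still at their initial values in an untrained network.

layers = [
    featureInputLayer(10)
    fullyConnectedLayer(16)
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(1)];
net = dlnetwork(layerGraph(layers));

X = dlarray(randn(10, 32, 'single'), 'CB');   % 10 features, mini-batch of 32

Ytrain = forward(net, X);    % training-mode pass: per-batch normalization statistics
Ytest  = predict(net, X);    % inference-mode pass: running statistics from net.State

disp(max(abs(extractdata(Ytrain) - extractdata(Ytest))))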
-------------------------Off-topic interlude, 2024-------------------------------
I am currently looking for a job in the field of CV algorithm development, based in Shenzhen, Guangdong, China, or a remote support position. I would be very grateful if anyone is willing to offer me a job or make a recommendation. My preliminary resume can be found at: https://cuixing158.github.io/about/ . Thank you!
Email: cuixingxing150@gmail.com
Accepted Answer
More Answers (3)
vaibhav mishra
30 Jun 2020
0 votes
Hi there,
In my opinion you are using BatchNorm in training mode but not in testing mode, so you cannot expect the same results from both. You need to use batch normalization at test time as well, with the same statistics learned during training.
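A minimal custom-training-loop sketch of that idea (toy network, data, and loss assumed here for illustration): the State returned by forward carries the updated batch-normalization running mean and variance, and writing it back into the network is what lets predict use statistics consistent with training.

layers = [
    featureInputLayer(10)
    fullyConnectedLayer(16)
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(1)];
net = dlnetwork(layerGraph(layers));
vel = [];

for iteration = 1:100
    X = dlarray(randn(10, 32, 'single'), 'CB');
    T = dlarray(randn(1, 32, 'single'), 'CB');
    [gradients, state] = dlfeval(@modelGradients, net, X, T);
    net.State = state;                        % keep the BN running statistics in sync
    [net, vel] = sgdmupdate(net, gradients, vel);
end

function [gradients, state] = modelGradients(net, X, T)
    [Y, state] = forward(net, X);             % training-mode pass, returns updated State
    loss = mse(Y, T);
    gradients = dlgradient(loss, net.Learnables);
end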
1 Comment
xingxingcui
7 Jul 2020
Edited: xingxingcui
7 Jul 2020
xingxingcui
12 Jul 2020
0 votes
Luc VIGNAUD
29 Jun 2021
0 votes
Thank you for raising this question. I did observe this issue while playing with GANs, and the difference indeed comes from the batchNorm. I ended up using InstanceNorm instead, but the question remains and should be answered by the MATLAB team ...
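A sketch of that swap (an assumed generator-style block; instanceNormalizationLayer is only available in releases that include it, R2021a or later). Instance normalization computes per-observation statistics in both forward and predict, so the two calls behave consistently:

layersBN = [
    transposedConv2dLayer(4, 64, 'Stride', 2, 'Cropping', 'same')
    batchNormalizationLayer
    reluLayer];

layersIN = [
    transposedConv2dLayer(4, 64, 'Stride', 2, 'Cropping', 'same')
    instanceNormalizationLayer
    reluLayer];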