High Error on Neural Network Test Dataset vs. Training and Validation

Hey all,
I'm running into an issue with my neural nets that I haven't really seen before and haven't been able to diagnose.
I have three datasets: a training set, a cross-validation set, and a test set. I recently added a significant number of new features to my neural network and started training; this is where the issue started.
The heart of the issue: I'm getting varying degrees of overfitting, which I've been combating by varying my L2Regularization and learning rates. Fair enough, but in ALMOST every case, my final accuracy on the test set (the data my network absolutely HASN'T seen) is consistently 10%+ lower than even my validation accuracy. The best sweet spot I've found is when all three accuracies (training, validation, and test) line up at right around 68%.
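For concreteness, here's a minimal sketch of how I compare the three accuracies (net, XTrain/YTrain, and the other names are placeholders standing in for my actual network and datasets):

% Classify each split and compare predicted labels against the truth
accTrain = mean(classify(net, XTrain) == YTrain);
accVal   = mean(classify(net, XVal)   == YVal);
accTest  = mean(classify(net, XTest)  == YTest);
fprintf('Train: %.1f%%  Val: %.1f%%  Test: %.1f%%\n', ...
    100*accTrain, 100*accVal, 100*accTest)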
Currently Varied Parameters in Bayesian Optimization (a sketch of the search follows this list):
Number of training epochs
Learn rate
L2Reg
Fully connected layer size
Number of hidden layers
Normalization methods
(I stopped experimenting with batch sizes, as they had almost no effect on accuracy over the range of 5-120 I tested.)
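Roughly, the search looks like the sketch below. The variable ranges and the trainAndEval objective are placeholders, not my exact setup:

% Hyperparameters searched by bayesopt (Statistics and Machine Learning Toolbox)
vars = [ ...
    optimizableVariable('MaxEpochs', [10 200],    'Type', 'integer')
    optimizableVariable('LearnRate', [1e-4 1e-1], 'Transform', 'log')
    optimizableVariable('L2Reg',     [1e-6 1e-2], 'Transform', 'log')
    optimizableVariable('LayerSize', [16 512],    'Type', 'integer')
    optimizableVariable('NumLayers', [1 6],       'Type', 'integer')];

% trainAndEval trains a network with one set of hyperparameters and returns
% the validation error (1 - validation accuracy) for bayesopt to minimize
results = bayesopt(@trainAndEval, vars, 'MaxObjectiveEvaluations', 30);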
I'm using the Adam solver. Are there other parameters you'd suggest I start varying? My algorithm automatically varies the hyperparameters with Bayesian optimization.
Side note: I'm not training on images. The basis of my network is a varying number of fully connected layers, something like the sketch below.
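(Layer sizes and option values here are illustrative, not my actual configuration; numFeatures and numClasses stand in for my real dimensions.)

% Fully connected classification network on feature data, trained with Adam
layers = [
    featureInputLayer(numFeatures, 'Normalization', 'zscore')
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

opts = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...
    'L2Regularization', 1e-4, ...
    'MaxEpochs', 50, ...
    'ValidationData', {XVal, YVal}, ...
    'Shuffle', 'every-epoch');

net = trainNetwork(XTrain, YTrain, layers, opts);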

Answers (0)
