Classification Experiment Design - stopping training
Hi,
I have a question (or several) about designing a binary classification experiment in MATLAB. I have a "design" set (for training and validation) and a separate test set to evaluate generalisation. My problem is deciding when to stop training before applying the resulting net to the test set. The nnet FAQ says: "Statisticians tend to be skeptical of stopped training because it appears to be statistically inefficient due to the use of the split-sample technique". So I use cross-validation, following this general method:
- For a given number of hidden nodes (H), I create 100 random starting weight sets (S).
- For each S, I randomly divide the design set into k equally sized, mutually exclusive subsets and train k nets, using subset k(i) as the validation set and the remaining k-1 subsets as the training set.
- Each net is trained with a stopping goal of mse_goal = 1e-6.
- I evaluate the validation error for each fold k(i) and retrain the relevant net for the number of epochs at which validation error was lowest.
- (Do I need to do this, or can I somehow select/return the net with the weights from that epoch out of the [net tr] output?)
- I apply this net to my test set to evaluate generalisation.
- The net, out of the S*k candidates, with the lowest generalisation error is the best trained net for my given H with the available data.
Does this make sense?
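A minimal sketch of one (H, S) pass of the procedure above, using Deep Learning Toolbox functions (`patternnet`, `cvpartition`, `divideind`); the variable names `x`, `t`, and the values of `H` and `k` are assumptions, not from the post:

```matlab
% Sketch: k-fold cross-validation with explicit train/validation splits.
% Assumes x is an I-by-N input matrix and t is an O-by-N 0/1 target
% matrix (one column per case, the toolbox convention).
H = 10;                           % hidden nodes (example value)
k = 10;                           % number of CV folds
cvp = cvpartition(size(x, 2), 'KFold', k);
vperf = zeros(1, k);

for i = 1:k
    net = patternnet(H);          % binary classification net
    net.divideFcn = 'divideind';  % use our own CV split, not dividerand
    net.divideParam.trainInd = find(training(cvp, i));
    net.divideParam.valInd   = find(test(cvp, i));
    net.divideParam.testInd  = []; % the true test set stays held out
    net.trainParam.goal = 1e-6;

    [net, tr] = train(net, x, t);
    % tr.vperf is indexed from epoch 0, so epoch e is tr.vperf(e+1)
    vperf(i) = tr.vperf(tr.best_epoch + 1);
end
```

On the retraining question: as I understand the toolbox documentation, when validation stopping fires, `train` returns the network with the weights from the epoch of minimum validation error (recorded in `tr.best_epoch`), so a second training pass to that epoch count should not be necessary; worth verifying on your toolbox version.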
1 Comment
Greg Heath
on 20 Apr 2013
You failed to give 4 important values:
1. N, size of the data set
2. I, input dimensionality
3. O, output dimensionality
4. MSE00, mean target variance: mean(var(target'))
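The four values above can be read directly off the design set; a sketch, assuming matrices named `input` (I-by-N) and `target` (O-by-N) in the toolbox's column-per-case convention:

```matlab
% Assumes 'input' and 'target' hold the design set, one column per case.
[I, N] = size(input);            % input dimensionality, number of cases
O      = size(target, 1);        % output dimensionality
MSE00  = mean(var(target'));     % mean target variance, i.e. the MSE of
                                 % a naive constant-mean model, used as a
                                 % reference scale for the net's MSE
```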