Crossvalidation of Neural Networks
Hi there!
I am a bit confused about how to use neural networks with cross-validation. I want to use the command-line functions to find good parameters for a neural network so that it can predict the correct classes for my dataset.
My general structure would look like this:
1. Determine the parameters to test.
2. Perform cross-validation:
2.1 Split my data into a training set and a test set.
2.2 Train the ANN with the training set.
2.3 See how well it performs on the test set.
For 2.2 I would use "train", for 2.3 I would use "sim". I am now confused because "train" already uses validation and test vectors. Does my procedure make sense anyway? In 2.1 I would then use crossvalind and a for-loop to split the feature matrix (rows: features, columns: observations) and the class matrix (nrOfClasses x nrOfObservations). "sim" outputs a matrix as well; for two classes that is 2 x nrOfObservations. Am I right to assume (I couldn't really figure that out from the documentation for "sim") that the values are the probabilities of the individual classes?
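Roughly, the loop I have in mind looks like this (just a sketch; X is my feature matrix, T my class matrix, and the number of folds and hidden neurons are placeholders):
k = 5;                                     % number of folds (placeholder)
nObs = size(X, 2);
foldIdx = crossvalind('Kfold', nObs, k);   % fold label for each observation
acc = zeros(k, 1);
for fold = 1:k
    testMask  = (foldIdx == fold);         % 2.1 split into training and test set
    trainMask = ~testMask;
    net = patternnet(10);                  % 10 hidden neurons (placeholder)
    net = train(net, X(:, trainMask), T(:, trainMask));   % 2.2 train on the training set
    Y = sim(net, X(:, testMask));          % 2.3 evaluate on the test set
    predicted = vec2ind(Y);                % row index of the largest output per column
    actual    = vec2ind(T(:, testMask));
    acc(fold) = mean(predicted == actual);
end
meanAccuracy = mean(acc)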
A few comforting words... ;-)
Thanks, Jay
Answers (1)
Mark Hudson Beale
on 9 Sep 2011
I am not sure I understand your question, but perhaps this will help.
If you want to divide your data set into a design set and your own validation set, you can do that before training.
During training with TRAIN, many of the training functions will divide the design data into a training set (used to optimize the error gradient), a validation set (used to measure generalization and stop training at the point of best generalization), and a test set (for an independent test).
As long as you have withheld your own validation set from the training function, it can serve as an additional independent test, beyond the test set selected during training, by calling SIM on the trained network. You can provide all the validation vectors at once, as columns of an input matrix, or loop through the samples calling SIM with one column vector at a time; the results will be the same either way.
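For example (just a sketch; X and T stand for your own input and target matrices, and the 20% holdout and the division ratios are arbitrary choices):
nObs = size(X, 2);
perm = randperm(nObs);
holdout = false(1, nObs);
holdout(perm(1:round(0.2*nObs))) = true;   % withhold 20% as your own validation set

Xdesign = X(:, ~holdout);  Tdesign = T(:, ~holdout);
Xval    = X(:,  holdout);  Tval    = T(:,  holdout);

net = patternnet(10);
net.divideParam.trainRatio = 0.70;         % TRAIN's own split of the design data
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net = train(net, Xdesign, Tdesign);

Yval = sim(net, Xval);                     % independent test on the withheld set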
For PATTERNNET the outputs represent class probabilities, but as independent estimates that may not sum to 1. If you need them to sum to 1, you can either normalize the outputs by dividing each column by its sum, or change the output layer's transfer function to SOFTMAX before training:
net.layers{2}.transferFcn = 'softmax';
SOFTMAX ensures output vectors sum to 1.
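To normalize after the fact instead (a sketch; Y here is whatever SIM returned, nrOfClasses x nrOfObservations):
Y = sim(net, Xval);                        % raw network outputs
Yprob = bsxfun(@rdivide, Y, sum(Y, 1));    % each column now sums to 1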