Classification Learner App: cross-validation, scatter plot, and confusion matrix
I have a question regarding this app; hopefully some app experts can help me :)
I read from the website: "If you use k-fold cross-validation, then the app computes the accuracy scores using the observations in the k validation folds and reports the average cross-validation error. It also makes predictions on the observations in these validation folds and computes the confusion matrix and ROC curve based on these predictions".
OK for the accuracy, but... if you look at the confusion matrix generated after selecting k-fold validation, it contains integer values. How are they determined? It is not an average of the confusion matrices obtained from each of the k validation folds... nor are they simply summed up, since the sum of all the elements corresponds to the number of observations in the training set provided... so?
The same goes for the scatter plot after training: you can see correct and incorrect observations in the figure. But are they marked correct/incorrect based on the average results over all k validation folds, or does the plot show the classification obtained from only one representative fold?
Thanks in advance.
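For reference, the quoted documentation suggests the app pools the out-of-fold predictions: each observation belongs to exactly one validation fold, so it is predicted exactly once, and pooling those predictions gives a single confusion matrix with integer entries that sum to the number of observations. Here is a minimal pure-Python sketch of that pooling idea; the 1-nearest-neighbour classifier and the toy data are hypothetical illustrations, not the app's internals:

```python
import random
from collections import Counter

def one_nn_predict(train, x):
    """Hypothetical toy classifier: label of the nearest 1-D training point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def pooled_kfold_confusion(data, k=5, seed=0):
    """Pool out-of-fold predictions into ONE confusion matrix.

    Each observation sits in exactly one validation fold, so it is
    predicted exactly once: the matrix entries are integers and they
    sum to len(data), with no averaging or double counting.
    """
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # disjoint folds covering all data
    cm = Counter()                         # (true_label, predicted_label) -> count
    for fold in folds:
        held_out = set(fold)
        train = [data[i] for i in range(len(data)) if i not in held_out]
        for i in fold:
            x, y_true = data[i]
            cm[(y_true, one_nn_predict(train, x))] += 1
    return cm

# Two well-separated 1-D classes, 10 observations in total.
data = [(v, 'A') for v in (0.0, 0.1, 0.2, 0.9, 1.1)] + \
       [(v, 'B') for v in (2.0, 2.1, 2.2, 2.9, 3.1)]
cm = pooled_kfold_confusion(data, k=5)
print(sum(cm.values()))  # 10 -- one pooled prediction per observation
```

Under this reading, no averaging is needed for the matrix: the "average" in the documentation applies only to the accuracy score, while the confusion matrix (and the correct/incorrect markers in the scatter plot) would come from the single pooled prediction each observation receives from the fold that held it out.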
Accepted Answer
More Answers (0)