Classification Learner App vs. training and testing a model programmatically: is there any hidden magical step in the Classification Learner app?
I am trying to find a good model to explain my dataset. The problem is that I want to do leave-one-person-out cross-validation, which is not available in the app. So I trained different models (e.g. Tree, SVM, KNN, LDA) using functions like fitctree, fitcsvm, fitcknn, and fitcdiscr. Following the leave-one-person-out procedure, the best model reaches an average classification accuracy of about 70%. However, when I use the app to model the data with 10-fold cross-validation, it does much better, with accuracy, TPR, and TNR around 98%. It is really confusing why this happens! I was wondering if there are some steps I am missing when I do the modeling programmatically. Or is there a way to do what the app does in a script, customizing the cross-validation scheme to leave-one-person-out?
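For illustration, a minimal sketch of the leave-one-person-out loop I mean (assuming a numeric feature matrix X, a numeric or categorical label vector Y, and a grouping vector person with one subject ID per row; all variable names are placeholders):

subjects = unique(person);
acc = zeros(numel(subjects), 1);
for i = 1:numel(subjects)
    testIdx = (person == subjects(i));            % hold out all samples of one person
    mdl = fitcsvm(X(~testIdx, :), Y(~testIdx));   % or fitctree, fitcknn, fitcdiscr
    pred = predict(mdl, X(testIdx, :));
    acc(i) = mean(pred == Y(testIdx));            % per-person accuracy
end
mean(acc)                                         % average leave-one-person-out accuracy

Note that ordinary 10-fold cross-validation splits observations at random, so samples from the same person can land in both the training and the test fold; with several samples per person this typically inflates the estimated accuracy compared to leave-one-person-out validation, which may account for part of the gap.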
Answers (1)
Stephan on 16 Jul 2018
Edited: Stephan on 16 Jul 2018
Hi,
A possible way to do this is to work with the app and then, once you get a good result, export the code to MATLAB. This lets you see the "magic" steps that are performed and modify the generated code if needed.
I imagine this procedure will solve your problem.
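For example (a rough sketch, not tested): exporting a model from Classification Learner generates a function, by default named trainClassifier, which returns a struct with a predictFcn field and reproduces any preprocessing the app applied. You could reuse it inside your own leave-one-person-out loop, assuming the same placeholder variables X, Y, and person as in the question, and assuming the table's predictor and response names match those the app was trained on:

tbl = array2table(X);               % the exported function expects a table
tbl.response = Y;                   % response name must match the export
subjects = unique(person);
acc = zeros(numel(subjects), 1);
for i = 1:numel(subjects)
    testIdx = (person == subjects(i));
    trainedClassifier = trainClassifier(tbl(~testIdx, :));    % app's full pipeline
    pred = trainedClassifier.predictFcn(tbl(testIdx, :));
    acc(i) = mean(pred == Y(testIdx));
end
mean(acc)

This way, whatever the app does internally (normalization, PCA, hyperparameter settings) is applied identically in every fold of your custom cross-validation scheme.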
Best regards
Stephan