Classification Learner App vs. training and testing a model programmatically: is there any hidden magical step in the Classification Learner App?

I am trying to find a good model to explain my dataset. The problem is that I want to do leave-one-person-out cross-validation, which is not available in the App. So I trained different models (e.g. Tree, SVM, KNN, LDA) using functions like fitctree, fitcsvm, fitcknn, and fitcdiscr. Following the leave-one-person-out procedure, the best model reaches an average classification accuracy of about 70%. However, when I use the App to model the same data with 10-fold cross-validation, the accuracy is much better, with TPR and TNR around 98%. It is really confusing why this is happening. I was wondering whether there are some steps I am missing when I do the modeling programmatically, or whether there is a way to do what the App does by writing scripts, possibly customizing the cross-validation scheme to leave-one-person-out.
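For reference, a minimal sketch of the leave-one-person-out procedure described above, written as an explicit loop. The variable names X (predictor matrix), y (class labels), and subjectID (person identifier per row) are assumptions, and fitcsvm stands in for any of the fitc* functions mentioned:

% Leave-one-person-out cross-validation, written out as an explicit loop.
% Assumed variables: X is the predictor matrix, y holds the class labels,
% and subjectID identifies the person for each row of X.
subjects  = unique(subjectID);
nSubjects = numel(subjects);
accuracy  = zeros(nSubjects, 1);

for i = 1:nSubjects
    % Hold out every observation that belongs to one person.
    testIdx  = subjectID == subjects(i);
    trainIdx = ~testIdx;

    % Train on all remaining subjects; fitcsvm is shown, but fitctree,
    % fitcknn, or fitcdiscr can be substituted in the same way.
    mdl = fitcsvm(X(trainIdx, :), y(trainIdx));

    % Evaluate only on the held-out person.
    % (Assumes y is numeric, logical, or categorical; use strcmp for cell
    % arrays of character vectors.)
    yPred = predict(mdl, X(testIdx, :));
    accuracy(i) = mean(yPred == y(testIdx));
end

fprintf('Mean leave-one-person-out accuracy: %.1f%%\n', 100 * mean(accuracy));

Because every observation from the held-out person is excluded from training, this estimate reflects generalization to unseen people, whereas 10-fold cross-validation can place samples from the same person in both the training and validation folds, which often yields a higher number.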

Answers (1)

Stephan
Stephan on 16 Jul 2018
Edited: Stephan on 16 Jul 2018
Hi,
A possible way to do this is to work with the App and then, once you have a good result, export the generated code to MATLAB. This lets you see the "magic" steps the App performs and modify the code if needed (a rough sketch of this is shown below).
There is some more information here.
I would expect this procedure to solve your problem.
Best regards
Stephan
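Building on this suggestion, here is a rough sketch of how the function exported from Classification Learner (via Generate Function) could be reused inside a leave-one-person-out loop, so that the App's own preprocessing and training steps are kept. The names trainClassifier, data, subjectID, and label are assumptions about the exported function and the training table:

% Assumption: trainClassifier is the function generated by Classification
% Learner and returns a trained model struct with a predictFcn handle;
% data is a table containing the predictors plus label and subjectID
% columns (all names are placeholders).
subjects = unique(data.subjectID);
acc = zeros(numel(subjects), 1);

for i = 1:numel(subjects)
    testRows = data.subjectID == subjects(i);   % one person held out per fold

    % Retrain with the App's exported pipeline on the remaining subjects.
    trainedClassifier = trainClassifier(data(~testRows, :));

    % Predict for the held-out person and score the fold.
    yPred  = trainedClassifier.predictFcn(data(testRows, :));
    acc(i) = mean(yPred == data.label(testRows));
end

mean(acc)   % leave-one-person-out estimate using the App's own pipeline

If the exported code also computes its own k-fold validation accuracy internally, that part can be ignored here, since the outer loop above already provides the validation scheme.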
  6 Comments
RZM
RZM on 16 Jul 2018
Thank you very much, Stephan. I would rather choose the one with higher performance, but I am afraid this k-fold CV does not take inter-subject variability into account. In other words, I am not sure which of these two CV approaches generalizes better. Regards


Release: R2018a
