How is predictor importance for classification trees calculated?
I am using MATLAB's function:
predictorImportance
to evaluate the usefulness of features I am extracting from 360° images.
I don't fully understand how predictor importance estimates are calculated and was hoping for a mathematical explanation of the algorithm used.
I have read the MATLAB documentation on this; however, I am still unsure about a few things.
Firstly, what is risk? I have assumed it to be the impurity reduction if using the Gini index as the splitting criterion.
Secondly, what does "This sum is taken over best splits found at each branch node" mean when surrogate splits aren't used?
Finally, I don't understand why the estimates change when you reorder the columns in the feature matrix.
Thank you in advance to anyone able to shed light on this for me.
Answers (1)
Gaurav Garg
on 27 Jan 2021
Hi Ryan,
Yes, risk is tied to the impurity measure: the risk of a node is its node probability multiplied by its node impurity (the Gini index by default). predictorImportance sums the decrease in risk produced by the best split at each branch node that splits on a given predictor and divides that sum by the total number of branch nodes, which is what the sentence you quoted refers to. You can also give 'twoing' or 'deviance' as the split criterion by following the doc here.
To understand why the estimates change when you reorder the columns of the feature matrix, you can go through the doc here, which describes how the split predictor is chosen at each branch node. In short, when two predictors give equally good candidate splits, the choice between them can depend on column order, so the risk reduction is credited to a different predictor.
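To see the estimates in practice, here is a minimal sketch using the fisheriris data set that ships with MATLAB (the predictor names are just illustrative labels, not anything from your 360° feature set):

load fisheriris    % meas is a 150-by-4 numeric matrix, species a 150-by-1 cell array of labels

% Grow a classification tree using the Gini index ('gdi', the default) as the split criterion
tree = fitctree(meas, species, ...
    'PredictorNames', {'SepalLength','SepalWidth','PetalLength','PetalWidth'}, ...
    'SplitCriterion', 'gdi');

% Each element of imp is the sum of the risk reductions over the branch nodes
% that split on that predictor, divided by the total number of branch nodes
imp = predictorImportance(tree);

bar(imp)
xticklabels(tree.PredictorNames)
ylabel('Predictor importance estimate')

Re-running the same code with 'SplitCriterion' set to 'twoing' or 'deviance' shows how the choice of criterion changes the estimates.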