Recommendation for Machine Learning Interpretability options for a SeriesNetwork object?
Hello –
I have a trained network (an LSTM) for time-series regression, stored as a SeriesNetwork object:
SeriesNetwork with properties:
Layers: [6×1 nnet.cnn.layer.Layer]
InputNames: {'sequenceinput'}
OutputNames: {'regressionoutput'}
I have used some canned routines for machine learning interpretability (e.g., shapley, lime, plotPartialDependence) that work great with some object types (e.g., RegressionSVM) but not with SeriesNetwork objects. The relevant functions I have read about appear to be intended for image classification rather than time-series regression.
My question is thus: Can you recommend a machine learning interpretability function for use with a SeriesNetwork object built for regression? I am confident such a function exists, but I can’t seem to find it. Any and all help would be greatly appreciated.
Thank you in advance.
Answers (1)
Shivansh
on 8 Nov 2023
Edited: 8 Nov 2023
Hi Bart,
I understand that you want to find a machine learning interpretability function for use with a SeriesNetwork object built for regression.
For time-series models, you can try the gradCAM function; the MathWorks documentation includes an example of applying Grad-CAM to a time-series classification model. Note that the method is designed primarily for convolutional networks, so it may not give good results for LSTMs. A minimal sketch of how it might be applied to your regression network is below.
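For reference, here is a minimal, untested sketch of how gradCAM might be applied to a sequence-to-one regression SeriesNetwork. The variable names (net, XTest), the input dimensions, and the layer name 'lstm' are assumptions; check net.Layers and the gradCAM documentation page for your release for the exact input format and name-value options.

% A minimal sketch, assuming "net" is the trained 6-layer SeriesNetwork above
% and that it performs sequence-to-one regression. Variable names, sizes, and
% the layer name 'lstm' are assumptions - adjust them to your own network.

XTest = rand(3,100);   % hypothetical observation: 3 features x 100 time steps

% For regression (nonclassification) tasks, gradCAM takes a reduction function
% that maps the network output to a scalar. With a single response, the
% identity function is enough.
reductionFcn = @(Y) Y;

% Grad-CAM normally uses a convolutional feature layer. For an LSTM-only
% network, try pointing it at the recurrent layer explicitly (check
% net.Layers for the actual layer name in your model).
scoreMap = gradCAM(net,XTest,reductionFcn,'FeatureLayer','lstm');

% The returned map assigns an importance score to each time step; plot it
% against the input sequence to see which time steps drive the prediction.
figure
subplot(2,1,1), plot(XTest'), title('Input sequence')
subplot(2,1,2), plot(scoreMap), title('Grad-CAM importance')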
Hope it helps!