Optimizing 1D CNN-Based Feature Selection
Hello everyone,
I trained a 1D CNN on a matrix of size 50000 × 39 and achieved 90% accuracy. I then applied feature selection using mutual information: starting with only the 2nd feature (50000 × 1) I obtained 54% accuracy, adding the 13th feature (50000 × 2) raised it to 60%, and so on until all 39 features were selected. However, with the features selected in this out-of-order fashion, I never reached the initial 90% accuracy. Is there a way to fix this and attain the same accuracy as the initial result?
Thanks in advance.
Answers (1)
Siraj
on 2 Nov 2023
Hi!
It is my understanding that you have trained a 1D CNN model on a 50000 × 39 data matrix, where each observation has 39 features. Using all 39 features you achieved 90% accuracy, but after applying feature selection and adding features one by one, you could not reach that accuracy even once all 39 features had been selected.
In my opinion, it is difficult to determine the exact order of features that will let you regain the initial accuracy. One likely reason for the discrepancy is that a 1D CNN convolves along the feature dimension, so the same 39 features presented in a different column order can produce a different accuracy. Still, there are some common approaches you can consider: one option is to prioritize selecting the most important features first (a ranking sketch follows below), and another is to apply a dimensionality reduction technique such as Principal Component Analysis (PCA) to reduce the feature space.
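As one way to rank features by importance before retraining, here is a minimal sketch using "fscmrmr", which ranks predictors with a mutual-information-based (minimum redundancy maximum relevance) criterion. The variable names X (the 50000-by-39 predictor matrix) and y (the class labels) are assumptions, not taken from your code:
% Rank predictors by an MRMR (mutual-information-based) criterion.
% X: 50000-by-39 predictor matrix, y: class labels (assumed variable names).
[idx, scores] = fscmrmr(X, y);   % idx(1) is the most important predictor
% Retrain the 1D CNN on, e.g., the 20 highest-ranked features,
% kept in their ranked order.
XTop = X(:, idx(1:20));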
To select features based on a custom criterion, you can utilize the "sequentialfs" function. This function allows you to perform sequential feature selection and evaluate the impact of each feature on the model's performance.
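A minimal sketch of how "sequentialfs" might be used here is shown below. It assumes the predictors are in X (50000-by-39) and the labels in y, and it uses a simple fitcecoc classifier inside the criterion purely as a fast stand-in for the 1D CNN; any function that returns a misclassification count for a candidate subset would work:
rng(0);                                   % for reproducibility
c = cvpartition(y, 'KFold', 5);           % 5-fold cross-validation
opts = statset('Display', 'iter');
% Criterion: number of misclassified test observations for a candidate subset.
critfun = @(XT, yT, Xt, yt) sum(yt ~= predict(fitcecoc(XT, yT), Xt));
[tokeep, history] = sequentialfs(critfun, X, y, 'cv', c, 'options', opts);
selectedIdx = find(tokeep);               % column indices of the selected features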
Another useful function for feature selection is "rankfeatures", which ranks features by class-separability criteria. It helps you identify the most discriminative features, so you can keep only the most relevant ones and potentially improve your model's accuracy.
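For example, assuming a two-class problem with the same X and y as above, a sketch using "rankfeatures" (Bioinformatics Toolbox) could look like this; note that rankfeatures expects features in rows, so the matrix is transposed:
% Rank features by a class-separability criterion (two-class t-test here).
[idx, z] = rankfeatures(X', y, 'Criterion', 'ttest');
top10 = idx(1:10);            % e.g. keep the 10 most discriminative features
Xreduced = X(:, top10);       % reduced data for retraining the model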
"relieff" is also a useful function for ranking the importance of predictors using the ReliefF or RReliefF algorithm. It helps identify the most influential features for improving model accuracy.
As an alternative to feature selection, Principal Component Analysis (PCA) can be applied to reduce the dimensionality of the data. By selecting a subset of principal components that retain the most important information, PCA can help improve the model's performance and potentially achieve the desired accuracy.
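A minimal PCA sketch, again assuming X holds the 50000-by-39 predictors; standardizing the columns first and keeping enough components to explain 95% of the variance are common but arbitrary choices:
% Project the standardized data onto its principal components.
[coeff, score, ~, ~, explained] = pca(zscore(X));
numComp = find(cumsum(explained) >= 95, 1);   % components covering 95% variance
Xpca = score(:, 1:numComp);                   % reduced data for the 1D CNN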
Hope this helps.