Variations in LSTM Accuracy Due to Shuffled Feature Columns

Hello everyone. I applied an LSTM to speech emotion recognition and achieved an accuracy of 42.1709%. However, when I shuffled the feature columns, the accuracy changed to 42.4925%. This variation is unexpected, because I used the same data with only the columns shuffled. I tried using gpurng and rng to keep the accuracy the same, without success. Could someone please assist me? The code I used is attached below. To shuffle the matrix, uncomment the lines: appp = appp(:, t(1:end)); testt = testt(:, t(1:end)).
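For reference, here is a minimal sketch of what the shuffling step might look like, using the variable names from the post (appp, testt, t). The seed values are my own placeholders; this is not the original attached code.

rng(0);                           % fix the CPU random number generator
gpurng(0);                        % fix the GPU generator (Parallel Computing Toolbox)
t = randperm(size(appp, 2));      % random permutation of the feature columns
appp  = appp(:, t);               % shuffle the training feature columns
testt = testt(:, t);              % shuffle the test feature columns the same way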

10 Comments

This is not something I know about; I do not have the resources to investigate it in detail.
Note: the MathWorks Support Team rarely responds to user posts; when they do, it tends to be 3 or 4 years later.
Would anyone help me, please?
I don't have the Deep Learning Toolbox (DLT), and I'm not generally familiar with the topic.
Did you know that the training process uses random numbers to initialize the weights? So it is to be expected that changing something gives a different outcome.
@Walter Roberson Yes, you are right, but with rng we can fix the seed and therefore the initial weights.
Suppose for illustration that with your rng() you generate column weights [3/4, 8/11, 2/9, 15/17]. You apply those to your original data, some kind of estimate is made, and training proceeds from there.
Now suppose that with your rng() you generate the same column weights [3/4, 8/11, 2/9, 15/17], but you had exchanged your 3rd and 4th columns. So the 2/9 weight would be applied to what was originally column 4, and the 15/17 weight would be applied to what was originally your column 3. Of course the result of the calculation is not going to be the same as the original.
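A quick numeric sketch of this point (the input values are made up for illustration): the same fixed weights applied to shuffled columns give a different weighted sum.

w = [3/4, 8/11, 2/9, 15/17];      % "initial weights" produced by a fixed seed
x = [1, 2, 3, 4];                 % one input row (made-up values)
xShuffled = x([1 2 4 3]);         % exchange the 3rd and 4th columns
w * x.'                           % original weighted sum (about 6.40)
w * xShuffled.'                   % different sum (about 5.74): 2/9 now multiplies old column 4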
@Walter Roberson I totally agree with you. Is there a way to extract the weights and then shuffle them the same way as the shuffled data? I think that would solve the issue!
I think this slight change in accuracy is to be expected.
When you freeze the weights but shuffle the features relative to those weights, the optimisation process starts by activating the layers with different values. The weights and the data are mapped to each other differently once the columns are shuffled.
If possible, re-order the weights after creating the neural network, according to the way you shuffled the columns, as sketched below. This way the initial weights multiply the same values in the dataset before backpropagation begins.
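A minimal sketch of that idea, assuming a plain lstmLayer and a data matrix whose columns are features; the hidden size, seed, and initialization are placeholders, not taken from the original post.

rng(0);                                       % same seed as the training run
numFeatures = size(appp, 2);                  % number of feature columns
numHidden   = 100;                            % placeholder hidden size
t = randperm(numFeatures);                    % the permutation applied to the data

layer = lstmLayer(numHidden);
W = 0.01 * randn(4*numHidden, numFeatures);   % manually chosen initial input weights
layer.InputWeights = W(:, t);                 % reorder the weight columns with the same t
% Since W(:, t) * x(t).' equals W * x.', each weight still multiplies its
% original feature, so training starts from the same effective mapping.

Note that this only equalizes the initial forward pass; whether the whole training run then matches depends on the rest of the pipeline being identical.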


Answers (0)

Asked: 11 Nov 2023
Edited: 24 Nov 2023
