training error in k-fold method

2 views (last 30 days)
SeyedHossein HOSSEINI on 8 Oct 2022
Edited: Drew on 19 Jan 2023
Hi
I am using the Regression Learner app in MATLAB and I want to use the k-fold method for validation. I set aside 15% of the data for testing (randomly selected), and for the remaining 85% of the data I used 5-fold validation. The Regression Learner app gives me the validation error, and when I enter the test data it also gives me a test error, but it has no menu or option for the training error. I want to know how I can calculate the training error. Is it OK if I run prediction on those 85% of the data, calculate the error metrics, and report them as the training error?
  1 Comment
Image Analyst on 8 Oct 2022
Edited: Image Analyst on 8 Oct 2022
Can you attach your data so we can try the Regression Learner ourselves? Which model(s) did you try your data on?
If you have any more questions, then attach your data, and the code to read it in, using the paperclip icon.


Answers (1)

Drew on 19 Jan 2023
Edited: Drew on 19 Jan 2023
The Regression Learner app does not show the error metrics on the training data using the final model. The answer at https://www.mathworks.com/matlabcentral/answers/1881227-question-on-regression-learner-app includes an example of how to get error metrics on the training data using the final model. In short, you can export the final model, then run prediction on the training data using the final model, then calculate the desired error metric. As an example of the calculation, if the final model is exported as trainedModel, and the training data is available in tbl_training, and the response is in tbl_training.Y, then this code calculates RMSE on the training data using the final model:
% Do prediction on the training set, using the final model
Y_training = trainedModel.predictFcn(tbl_training);
% Calculate RMSE on training set using final model
rmse_on_training_data = sqrt(mean((Y_training-tbl_training.Y).^2))
In general, the error metrics (such as RMSE) on the training data using the final model will be lower than the error metrics on the validation data using the k-fold validation models, because testing the final model on the training data is "cheating": the model has already seen the data being predicted. That is, the same data is being used for both training and testing, so the error rate on the training data is not a good estimate of the error rate one can expect on future, new test data that was not seen during model training.
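The same exported model can be used to compute other training-set metrics beyond RMSE. A minimal sketch, assuming the trainedModel and tbl_training variables from the example above, with the true response in tbl_training.Y:

```matlab
% Predict on the training set using the exported final model
Y_pred = trainedModel.predictFcn(tbl_training);
residuals = tbl_training.Y - Y_pred;

% Common regression error metrics on the training data
rmse_train = sqrt(mean(residuals.^2));   % root mean squared error
mae_train  = mean(abs(residuals));       % mean absolute error

% Coefficient of determination (R^2)
sse = sum(residuals.^2);
sst = sum((tbl_training.Y - mean(tbl_training.Y)).^2);
r2_train = 1 - sse/sst;
```

Comparing these training-set values against the validation and test metrics reported by the app is a quick way to gauge how much the model is overfitting.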

