- Consider the complexity of the models. A smaller model that explains the data as well as a larger one is generally preferred, since it is simpler and easier to interpret.
- Look at other goodness-of-fit metrics, such as the adjusted R-squared, AIC (Akaike Information Criterion), or BIC (Bayesian Information Criterion). These metrics penalize model complexity, so the smaller model may score better even when the raw R-squared values are nearly equal. An F-test can also be used to test whether the larger model (v2) fits significantly better than the smaller model (v1).
- Conduct cross-validation or use a hold-out dataset to test the performance of the models on new data. The model with better out-of-sample performance is generally preferred.
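The penalized criteria above can be computed directly from the residual sums of squares of the two least-squares fits. Below is a minimal NumPy sketch, not MATLAB code and not from the original answer; the data, the `compare_nested` helper, and the linear-vs-quadratic model pair are all illustrative assumptions. It reports adjusted R-squared, AIC, BIC, and the partial F statistic for two nested models.

```python
import numpy as np

def ols_rss(X, y):
    """Ordinary least squares; return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

def compare_nested(X_small, X_big, y):
    """Adjusted R^2, AIC, BIC for each model, plus the partial F statistic."""
    n = y.size
    tss = float(np.sum((y - y.mean()) ** 2))
    out = {}
    for name, X in (("small", X_small), ("big", X_big)):
        p = X.shape[1]
        rss = ols_rss(X, y)
        r2 = 1.0 - rss / tss
        out[name] = {
            "rss": rss,
            "adj_r2": 1.0 - (1.0 - r2) * (n - 1) / (n - p),
            # Gaussian-likelihood AIC/BIC, up to an additive constant
            "aic": n * np.log(rss / n) + 2 * p,
            "bic": n * np.log(rss / n) + p * np.log(n),
        }
    p1, p2 = X_small.shape[1], X_big.shape[1]
    rss1, rss2 = out["small"]["rss"], out["big"]["rss"]
    # Partial F-test for nested models
    out["F"] = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
    return out

# Illustrative data (an assumption, not the asker's data): the true curve is
# linear, so the quadratic term should add little explanatory power.
rng = np.random.default_rng(0)
f = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * f + rng.normal(0.0, 0.1, f.size)
X1 = np.column_stack([np.ones_like(f), f])        # smaller model: a1 + a2*f
X2 = np.column_stack([np.ones_like(f), f, f**2])  # larger model: adds a3*f^2
stats = compare_nested(X1, X2, y)
```

Compare the resulting F statistic with the critical value of an F distribution with (p2 − p1, n − p2) degrees of freedom; if it falls below that value, the extra parameters are not justified and the smaller model is preferred.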
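The cross-validation suggestion can be sketched in the same spirit. This is a hypothetical NumPy example (the data, fold count, and `kfold_mse` helper are all assumptions, not from the original answer): it estimates out-of-sample mean squared error for each model with a manual k-fold split, and the model with the lower CV error would be the one to prefer.

```python
import numpy as np

def kfold_mse(X, y, k=5, seed=0):
    """Mean squared prediction error from k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(y.size)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit on the training folds, score on the held-out fold
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errs))

# Illustrative data (assumed, not from the question): linear truth plus noise.
rng = np.random.default_rng(1)
f = rng.uniform(0.0, 1.0, 80)
y = 1.0 + 2.0 * f + rng.normal(0.0, 0.1, f.size)
X_lin = np.column_stack([np.ones_like(f), f])
X_quad = np.column_stack([np.ones_like(f), f, f**2])
mse_lin = kfold_mse(X_lin, y)
mse_quad = kfold_mse(X_quad, y)
```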
 
How to compare two nested models when they have a very small R-squared difference.
I have two models, i.e. v1 = a1 + a2*f + a3*f^2 and v2 = k*(a1 + a2*f + a3*f^2).
Answers (1)
Rohit
on 20 Mar 2023
        When comparing two nested models with very small differences in R-squared, it is important to consider other metrics and factors to determine which model is better.  
Here are some suggestions: 