When building a high-quality regression model, it is important to select the right features (or predictors), tune hyperparameters (model parameters not fit to the data), and assess model assumptions through residual diagnostics.
You can tune hyperparameters by iterating between choosing candidate values and cross-validating a model with those choices. This process yields multiple models, and the best model among them can be the one that minimizes the estimated generalization error. For example, to tune an SVM model, choose a set of box constraints and kernel scales, cross-validate a model for each pair of values, and then compare their 10-fold cross-validated mean squared error estimates.
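A minimal sketch of that grid search might look like the following, assuming a table `tbl` of predictors with a response variable named `MPG` (both are illustrative assumptions):

```matlab
% Sketch: grid search over SVM hyperparameters, scored by 10-fold
% cross-validated MSE. tbl and the response name MPG are assumptions.
boxConstraints = [0.1 1 10];
kernelScales   = [0.5 1 2];
bestMSE = Inf;
for bc = boxConstraints
    for ks = kernelScales
        cvMdl = fitrsvm(tbl,'MPG','KernelFunction','gaussian', ...
            'BoxConstraint',bc,'KernelScale',ks,'KFold',10);
        mse = kfoldLoss(cvMdl);      % cross-validated mean squared error
        if mse < bestMSE
            bestMSE = mse;
            bestPair = [bc ks];      % best box constraint and kernel scale
        end
    end
end
```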
Certain nonparametric regression functions in Statistics and Machine Learning Toolbox™ additionally offer automatic hyperparameter tuning through Bayesian optimization, grid search, or random search. However, bayesopt, which is the main function for implementing Bayesian optimization, is flexible enough for many other applications. For more details, see Bayesian Optimization Workflow.
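As a minimal sketch of such automatic tuning (again assuming a table `tbl` with response `MPG`), a fit function can optimize its own hyperparameters:

```matlab
% Sketch: automatic hyperparameter tuning inside a fit function via
% Bayesian optimization. tbl and the response name MPG are assumptions.
mdl = fitrsvm(tbl,'MPG', ...
    'OptimizeHyperparameters','auto', ...
    'HyperparameterOptimizationOptions', ...
    struct('AcquisitionFunctionName','expected-improvement-plus'));
```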
|App|Description|
|---|---|
|Regression Learner|Train regression models to predict data using supervised machine learning|
|Function|Description|
|---|---|
|`fsrnca`|Feature selection using neighborhood component analysis for regression|
|`oobPermutedPredictorImportance`|Predictor importance estimates by permutation of out-of-bag predictor observations for random forest of regression trees|
|`plotPartialDependence`|Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots|
|`predictorImportance`|Estimates of predictor importance for regression tree|
|`predictorImportance`|Estimates of predictor importance for regression ensemble|
|`relieff`|Rank importance of predictors using ReliefF or RReliefF algorithm|
|`sequentialfs`|Sequential feature selection using custom criterion|
|`stepwiselm`|Fit linear regression model using stepwise regression|
|`stepwiseglm`|Create generalized linear regression model by stepwise regression|
|`coefCI`|Confidence intervals of coefficient estimates of linear regression model|
|`coefTest`|Linear hypothesis test on linear regression model coefficients|
|`dwtest`|Durbin-Watson test with linear regression model object|
|`plot`|Scatter plot or added variable plot of linear regression model|
|`plotAdded`|Added variable plot of linear regression model|
|`plotAdjustedResponse`|Adjusted response plot of linear regression model|
|`plotDiagnostics`|Plot observation diagnostics of linear regression model|
|`plotEffects`|Plot main effects of predictors in linear regression model|
|`plotInteraction`|Plot interaction effects of two predictors in linear regression model|
|`plotResiduals`|Plot residuals of linear regression model|
|`plotSlice`|Plot of slices through fitted linear regression surface|
|`coefCI`|Confidence intervals of coefficient estimates of generalized linear model|
|`coefTest`|Linear hypothesis test on generalized linear regression model coefficients|
|`devianceTest`|Analysis of deviance|
|`plotDiagnostics`|Plot diagnostics of generalized linear regression model|
|`plotResiduals`|Plot residuals of generalized linear regression model|
|`plotSlice`|Plot of slices through fitted generalized linear regression surface|
|`coefCI`|Confidence intervals of coefficient estimates of nonlinear regression model|
|`coefTest`|Linear hypothesis test on nonlinear regression model coefficients|
|`plotDiagnostics`|Plot diagnostics of nonlinear regression model|
|`plotResiduals`|Plot residuals of nonlinear regression model|
|`plotSlice`|Plot of slices through fitted nonlinear regression surface|
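As one illustration of the functions in this table, a stepwise regression sketch using `stepwiselm` follows; `carsmall` is a data set that ships with the toolbox, and the starting model and upper bound are illustrative choices:

```matlab
% Sketch: stepwise linear regression, growing from a constant model up to
% a model with interaction terms. Model bounds here are illustrative.
load carsmall
tbl = table(Acceleration,Displacement,Horsepower,Weight,MPG);
mdl = stepwiselm(tbl,'constant','ResponseVar','MPG','Upper','interactions');
disp(mdl)
```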
Workflow for training, comparing, and improving regression models, including automated, manual, and parallel training.
In Regression Learner, automatically train a selection of models, or compare and tune options of linear regression models, regression trees, support vector machines, Gaussian process regression models, and ensembles of regression trees.
Identify useful predictors using plots, manually select features to include, and transform features using PCA in Regression Learner.
Compare model statistics and visualize results.
Learn about feature selection algorithms and explore the functions available for feature selection.
This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.
Neighborhood component analysis (NCA) is a nonparametric method for selecting features with the goal of maximizing prediction accuracy of regression and classification algorithms (a short sketch follows this group of topics).
Perform feature selection that is robust to outliers using a custom robust loss function in NCA.
Select split predictors for random forests using the interaction test algorithm.
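A minimal NCA sketch, assuming a predictor matrix `X` and response vector `y` in the workspace; the regularization value and weight threshold are illustrative:

```matlab
% Sketch: NCA feature selection for regression with fsrnca. X, y, the
% Lambda value, and the weight threshold are assumptions.
nca = fsrnca(X,y,'Solver','lbfgs','Lambda',0.5/size(X,1));
selected = find(nca.FeatureWeights > 0.02);  % keep features with large weights
```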
Perform Bayesian optimization using a fit function or by calling bayesopt directly.
Create variables for Bayesian optimization.
Create the objective function for Bayesian optimization.
Set different types of constraints for Bayesian optimization.
Minimize cross-validation loss of a regression ensemble (see the sketch after this list).
Visually monitor a Bayesian optimization by using plot functions.
Monitor a Bayesian optimization by using output functions.
Understand the underlying algorithms for Bayesian optimization.
How Bayesian optimization works in parallel.
Speed up cross-validation using parallel computing.
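Tying these topics together, a minimal `bayesopt` sketch that minimizes the 10-fold cross-validation loss of a bagged regression ensemble; the table `tbl`, response `MPG`, and search ranges are assumptions:

```matlab
% Sketch: Bayesian optimization of two ensemble hyperparameters. tbl,
% MPG, and the search ranges are assumptions, not toolbox defaults.
maxSplits = optimizableVariable('MaxNumSplits',[1 50],'Type','integer');
numCycles = optimizableVariable('NumCycles',[10 200],'Type','integer');
objFcn = @(p) kfoldLoss(fitrensemble(tbl,'MPG','Method','Bag', ...
    'NumLearningCycles',p.NumCycles, ...
    'Learners',templateTree('MaxNumSplits',p.MaxNumSplits), ...
    'KFold',10));
results = bayesopt(objFcn,[maxSplits numCycles], ...
    'AcquisitionFunctionName','expected-improvement-plus');
```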
Display and interpret linear regression output statistics.
Fit a linear regression model and examine the result.
Construct and analyze a linear regression model with interaction effects and interpret the results.
Evaluate a fitted model by using model properties and object functions.
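A minimal sketch of that workflow, using the `carsmall` data set that ships with the toolbox (the model formula is illustrative):

```matlab
% Sketch: fit a linear model and inspect its output statistics.
load carsmall
tbl = table(Weight,Horsepower,MPG);
mdl = fitlm(tbl,'MPG ~ Weight + Horsepower');
disp(mdl)                     % coefficients, t-statistics, R-squared, F-statistic
ci = coefCI(mdl);             % 95% confidence intervals for the coefficients
plotResiduals(mdl,'fitted')   % residuals versus fitted values
```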
In linear regression, the F-statistic is the test statistic for the analysis of variance (ANOVA) approach to test the significance of the model or the components in the model. The t-statistic is useful for making inferences about the regression coefficients.
The coefficient of determination (R-squared) indicates the proportion of the variation in the response variable y that is explained by the independent variables X in the linear regression model.
Estimated coefficient variances and covariances capture the precision of regression coefficient estimates.
Residuals are useful for detecting outlying y values and checking the linear regression assumptions with respect to the error term in the regression model.
The Durbin-Watson test assesses whether there is autocorrelation among the residuals of time series data.
Cook's distance is useful for identifying outliers in the X values (observations for predictor variables).
The hat matrix provides a measure of leverage.
Delete-1 change in covariance (CovRatio) identifies the observations that are influential in the regression fit.
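Continuing the `fitlm` sketch above, these diagnostics are available from the Diagnostics property of a fitted model; the Cook's distance cutoff below is a common rule of thumb, not a toolbox default:

```matlab
% Sketch: influence diagnostics on a fitted LinearModel mdl.
lev    = mdl.Diagnostics.Leverage;        % hat matrix diagonal (leverage)
cooksD = mdl.Diagnostics.CooksDistance;   % influence on fitted values
covr   = mdl.Diagnostics.CovRatio;        % delete-1 change in covariance
[p,dw] = dwtest(mdl);                     % Durbin-Watson autocorrelation test
influential = find(cooksD > 3*mean(cooksD));  % rule-of-thumb cutoff
```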
Generalized linear models use linear methods to describe a potentially nonlinear relationship between predictor terms and a response variable.
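A minimal sketch, assuming a table `tbl` with a count response named `count` and predictors `x1` and `x2` (all names here are illustrative):

```matlab
% Sketch: Poisson generalized linear model. The table and variable names
% are assumptions for illustration only.
glm = fitglm(tbl,'count ~ x1 + x2','Distribution','poisson');
devianceTest(glm)    % analysis of deviance against the constant model
```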
Parametric nonlinear models represent the relationship between a continuous response variable and one or more continuous predictor variables.
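A minimal sketch of fitting such a model with `fitnlm`, assuming predictor data `x`, response `y`, and illustrative starting values:

```matlab
% Sketch: parametric nonlinear regression y = b1*(1 - exp(-b2*x)).
% x, y, and beta0 are illustrative assumptions.
modelfun = @(b,x) b(1)*(1 - exp(-b(2)*x(:,1)));
beta0 = [100 0.1];                 % starting values for the coefficients
nlm = fitnlm(x,y,modelfun,beta0);
plotSlice(nlm)                     % slices through the fitted surface
```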