# Automated Regression Model Selection with Bayesian and ASHA Optimization

This example shows how to use the `fitrauto` function to automatically try a selection of regression model types with different hyperparameter values, given training predictor and response data. By default, the function uses Bayesian optimization to select and assess models. If your training data set contains many observations, you can use an asynchronous successive halving algorithm (ASHA) instead. After the optimization is complete, `fitrauto` returns the model, trained on the entire data set, that is expected to best predict the responses for new data. Check the model performance on test data.

### Prepare Data

Load the sample data set `NYCHousing2015`, which includes 10 variables with information on the sales of properties in New York City in 2015. This example uses some of these variables to analyze the sale prices.

`load NYCHousing2015`

Instead of loading the sample data set `NYCHousing2015`, you can download the data from the NYC Open Data website and import the data as follows.

```
folder = 'Annualized_Rolling_Sales_Update';
ds = spreadsheetDatastore(folder,"TextType","string","NumHeaderLines",4);
ds.Files = ds.Files(contains(ds.Files,"2015"));
ds.SelectedVariableNames = ["BOROUGH","NEIGHBORHOOD","BUILDINGCLASSCATEGORY","RESIDENTIALUNITS", ...
    "COMMERCIALUNITS","LANDSQUAREFEET","GROSSSQUAREFEET","YEARBUILT","SALEPRICE","SALEDATE"];
NYCHousing2015 = readall(ds);
```

Preprocess the data set to choose the predictor variables of interest. Some of the preprocessing steps match those in the example Train Linear Regression Model.

First, change the variable names to lowercase for readability.

`NYCHousing2015.Properties.VariableNames = lower(NYCHousing2015.Properties.VariableNames);`

Next, remove samples with certain problematic values. For example, retain only those samples where at least one of the area measurements `grosssquarefeet` or `landsquarefeet` is nonzero. Assume that a `saleprice` of \$0 indicates an ownership transfer without a cash consideration, and remove the samples with that `saleprice` value. Assume that a `yearbuilt` value of 1500 or less is a typo, and remove the corresponding samples.

```
NYCHousing2015(NYCHousing2015.grosssquarefeet == 0 & NYCHousing2015.landsquarefeet == 0,:) = [];
NYCHousing2015(NYCHousing2015.saleprice == 0,:) = [];
NYCHousing2015(NYCHousing2015.yearbuilt <= 1500,:) = [];
```

Convert the `saledate` variable, specified as a `datetime` array, into two numeric columns `MM` (month) and `DD` (day), and remove the `saledate` variable. Ignore the year values because all samples are for the year 2015.

```
[~,NYCHousing2015.MM,NYCHousing2015.DD] = ymd(NYCHousing2015.saledate);
NYCHousing2015.saledate = [];
```

The numeric values in the `borough` variable indicate the names of the boroughs. Change the variable to a categorical variable using the names.

```
NYCHousing2015.borough = categorical(NYCHousing2015.borough,1:5, ...
    ["Manhattan","Bronx","Brooklyn","Queens","Staten Island"]);
```

The `neighborhood` variable has 254 categories. Remove this variable for simplicity.

`NYCHousing2015.neighborhood = [];`

Convert the `buildingclasscategory` variable to a categorical variable, and explore the variable by using the `wordcloud` function.

```
NYCHousing2015.buildingclasscategory = categorical(NYCHousing2015.buildingclasscategory);
wordcloud(NYCHousing2015.buildingclasscategory);
```

Assume that you are interested only in one-, two-, and three-family dwellings. Find the sample indices for these dwellings and delete the other samples. Then, change the `buildingclasscategory` variable to an ordinal categorical variable, with integer-valued category names.

```
idx = ismember(string(NYCHousing2015.buildingclasscategory), ...
    ["01 ONE FAMILY DWELLINGS","02 TWO FAMILY DWELLINGS","03 THREE FAMILY DWELLINGS"]);
NYCHousing2015 = NYCHousing2015(idx,:);
NYCHousing2015.buildingclasscategory = categorical(NYCHousing2015.buildingclasscategory, ...
    ["01 ONE FAMILY DWELLINGS","02 TWO FAMILY DWELLINGS","03 THREE FAMILY DWELLINGS"], ...
    ["1","2","3"],'Ordinal',true);
```

The `buildingclasscategory` variable now indicates the number of families in one dwelling.

Explore the response variable `saleprice` by using the `summary` function.

```
s = summary(NYCHousing2015);
s.saleprice
```
```
ans = struct with fields:
           Size: [24972 1]
           Type: 'double'
    Description: ''
          Units: ''
     Continuity: []
            Min: 1
         Median: 515000
            Max: 37000000
     NumMissing: 0
```

Create a histogram of the `saleprice` variable.

`histogram(NYCHousing2015.saleprice)`

Because the distribution of `saleprice` values is right-skewed, with all values greater than 0, log transform the `saleprice` variable.

`NYCHousing2015.saleprice = log(NYCHousing2015.saleprice);`

Similarly, transform the `grosssquarefeet` and `landsquarefeet` variables. Add a value of 1 before taking the logarithm of each variable, in case the variable is equal to 0.

```
NYCHousing2015.grosssquarefeet = log(1 + NYCHousing2015.grosssquarefeet);
NYCHousing2015.landsquarefeet = log(1 + NYCHousing2015.landsquarefeet);
```
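As an aside, MATLAB's `log1p` function computes log(1 + x) with better numerical accuracy when x is close to zero, so the same transform could equivalently be written as follows (an alternative sketch, not an additional step):

```
% Equivalent transform using log1p, which is more accurate for
% area values at or near zero:
NYCHousing2015.grosssquarefeet = log1p(NYCHousing2015.grosssquarefeet);
NYCHousing2015.landsquarefeet  = log1p(NYCHousing2015.landsquarefeet);
```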

### Partition Data and Remove Outliers

Partition the data set into a training set and a test set by using `cvpartition`. Use approximately 80% of the observations for the model selection and hyperparameter tuning process, and the other 20% to test the performance of the final model returned by `fitrauto`.

```
rng("default") % For reproducibility of the partition
c = cvpartition(length(NYCHousing2015.saleprice),"Holdout",0.2);
trainData = NYCHousing2015(training(c),:);
testData = NYCHousing2015(test(c),:);
```

Identify and remove the outliers of `saleprice`, `grosssquarefeet`, and `landsquarefeet` from the training data by using the `isoutlier` function.

```
[priceIdx,priceL,priceU] = isoutlier(trainData.saleprice);
trainData(priceIdx,:) = [];
[grossIdx,grossL,grossU] = isoutlier(trainData.grosssquarefeet);
trainData(grossIdx,:) = [];
[landIdx,landL,landU] = isoutlier(trainData.landsquarefeet);
trainData(landIdx,:) = [];
```

Remove the outliers of `saleprice`, `grosssquarefeet`, and `landsquarefeet` from the test data by using the same lower and upper thresholds computed on the training data.

```
testData(testData.saleprice < priceL | testData.saleprice > priceU,:) = [];
testData(testData.grosssquarefeet < grossL | testData.grosssquarefeet > grossU,:) = [];
testData(testData.landsquarefeet < landL | testData.landsquarefeet > landU,:) = [];
```

### Use Automated Model Selection with Bayesian Optimization

Find an appropriate regression model for the data in `trainData` by using `fitrauto`. By default, `fitrauto` uses Bayesian optimization to select models and their hyperparameter values, and computes the $\mathrm{log}\left(1+valLoss\right)$ value for each model, where valLoss is the cross-validation mean squared error (MSE). `fitrauto` provides a plot of the optimization and an iterative display of the optimization results. For more information on how to interpret these results, see Verbose Display.

Specify to run the Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. Due to the nonreproducibility of parallel timing, parallel Bayesian optimization does not necessarily yield reproducible results. Because of the complexity of the optimization, this process can take some time, especially for larger data sets.
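If no parallel pool is open when `fitrauto` runs with `UseParallel` set to `true`, one can be started based on your parallel preferences; you can also open a pool explicitly beforehand. A minimal sketch, assuming the default cluster profile:

```
% Open a parallel pool on the default cluster profile if none exists.
if isempty(gcp('nocreate'))
    parpool;
end
```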

```
bayesianOptions = struct("UseParallel",true);
[bayesianMdl,bayesianResults] = fitrauto(trainData,"saleprice", ...
    "HyperparameterOptimizationOptions",bayesianOptions);
```
```
Warning: Data set has more than 10000 observations. Because ASHA optimization often finds
good solutions faster than Bayesian optimization for data sets with many observations, try
specifying the 'Optimizer' field value as 'asha' in the 'HyperparameterOptimizationOptions'
value structure.
```
```
Copying objective function to workers...
Done copying objective function to workers.
Learner types to explore: ensemble, svm, tree
Total iterations (MaxObjectiveEvaluations): 90
Total time (MaxTime): Inf

|=============================================================================================================================================|
| Iter | Active  | Eval   | log(1+valLoss)| Time for training | Observed min    | Estimated min   | Learner  | Hyperparameter: Value      |
|      | workers | result |               | & validation (sec)| validation loss | validation loss |          |                            |
|=============================================================================================================================================|
|    1 |       8 | Best   |       0.25922 |            8.7966 |         0.25922 |         0.25922 | svm      | BoxConstraint: 0.0055914   |
|      |         |        |               |                   |                 |                 |          | KernelScale: 0.0056086     |
|      |         |        |               |                   |                 |                 |          | Epsilon: 17.88             |
|    2 |       7 | Accept |       0.19644 |            67.356 |         0.19314 |         0.19521 | ensemble | Method: Bag                |
|      |         |        |               |                   |                 |                 |          | NumLearningCycles: 232     |
|      |         |        |               |                   |                 |                 |          | MinLeafSize: 8             |
|    3 |       7 | Best   |       0.19314 |             67.33 |         0.19314 |         0.19521 | svm      | BoxConstraint: 529.96      |
|      |         |        |               |                   |                 |                 |          | KernelScale: 813.67        |
|      |         |        |               |                   |                 |                 |          | Epsilon: 0.0014318         |
...
|   49 |       8 | Best   |       0.17736 |            276.94 |         0.17736 |         0.17735 | ensemble | Method: LSBoost            |
|      |         |        |               |                   |                 |                 |          | NumLearningCycles: 254     |
|      |         |        |               |                   |                 |                 |          | MinLeafSize: 330           |
...
|   58 |       8 | Best   |       0.17725 |            212.81 |         0.17725 |         0.17725 | ensemble | Method: LSBoost            |
|      |         |        |               |                   |                 |                 |          | NumLearningCycles: 221     |
|      |         |        |               |                   |                 |                 |          | MinLeafSize: 63            |
...
|   90 |       8 | Accept |       0.18562 |            1.5575 |         0.17725 |         0.17725 | tree     | MinLeafSize: 79            |
|=============================================================================================================================================|
```

```
__________________________________________________________
Optimization completed.
Total iterations: 90
Total elapsed time: 940.6075 seconds
Total time for training and validation: 3869.0047 seconds

Best observed learner is an ensemble model with:
    Learner: ensemble
    Method: LSBoost
    NumLearningCycles: 221
    MinLeafSize: 63
Observed log(1 + valLoss): 0.17725
Time for training and validation: 212.8107 seconds

Best estimated learner (returned model) is an ensemble model with:
    Learner: ensemble
    Method: LSBoost
    NumLearningCycles: 221
    MinLeafSize: 63
Estimated log(1 + valLoss): 0.17725
Estimated time for training and validation: 212.9539 seconds

Documentation for fitrauto display
```

The `Total elapsed time` value shows that the Bayesian optimization took a while to run (about 16 minutes).

The final model returned by `fitrauto` corresponds to the best estimated learner. Before returning the model, the function retrains it using the entire training data set (`trainData`), the listed `Learner` (or model) type, and the displayed hyperparameter values.
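To gauge how well the returned model generalizes, you can evaluate its test-set mean squared error with the `loss` object function, placed on the same log(1 + MSE) scale that the optimization used. A sketch (`bayesianTestLoss` is an illustrative variable name):

```
% Test-set MSE of the returned model, on the log(1 + MSE) scale
% used during the optimization.
bayesianTestLoss = log(1 + loss(bayesianMdl,testData,"saleprice"))
```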

### Use Automated Model Selection with ASHA Optimization

When `fitrauto` with Bayesian optimization takes a long time to run because of the number of observations in your training set, consider using `fitrauto` with ASHA optimization instead. Given that `trainData` contains over 10,000 observations, try using `fitrauto` with ASHA optimization to automatically find an appropriate regression model. When you use `fitrauto` with ASHA optimization, the function randomly chooses several models with different hyperparameter values and trains them on a small subset of the training data. If the $\mathrm{log}\left(1+valLoss\right)$ value for a particular model is promising, where valLoss is the cross-validation MSE, the model is promoted and trained on a larger amount of the training data. This process repeats, and successful models are trained on progressively larger amounts of data. By default, `fitrauto` provides a plot of the optimization and an iterative display of the optimization results. For more information on how to interpret these results, see Verbose Display.

Specify to run the ASHA optimization in parallel. Note that ASHA optimization often has more iterations than Bayesian optimization by default. If you have a time constraint, you can specify the `MaxTime` field of the `HyperparameterOptimizationOptions` structure to limit the number of seconds `fitrauto` runs.
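For example, to cap the total optimization run at a hypothetical 30-minute limit, you could add the `MaxTime` field (in seconds) to the options structure:

```
% Limit the ASHA optimization to 1800 seconds (30 minutes).
ashaOptions = struct("Optimizer","asha","UseParallel",true,"MaxTime",1800);
```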

```
ashaOptions = struct("Optimizer","asha","UseParallel",true);
[ashaMdl,ashaResults] = fitrauto(trainData,"saleprice", ...
    "HyperparameterOptimizationOptions",ashaOptions);
```
```
Copying objective function to workers...
Done copying objective function to workers.
Learner types to explore: ensemble, svm, tree
Total iterations (MaxObjectiveEvaluations): 340
Total time (MaxTime): Inf

|==========================================================================================================================================|
| Iter | Active  | Eval   | log(1+valLoss)| Time for training | Observed min    | Training set | Learner  | Hyperparameter: Value      |
|      | workers | result |               | & validation (sec)| validation loss | size         |          |                            |
|==========================================================================================================================================|
|    1 |       7 | Error  |           NaN |           0.74354 |         0.25939 |          228 | svm      | BoxConstraint: 0.75271     |
|      |         |        |               |                   |                 |              |          | KernelScale: 11.791        |
|      |         |        |               |                   |                 |              |          | Epsilon: 0.70708           |
|    2 |       7 | Best   |       0.25939 |            0.6809 |         0.25939 |          228 | svm      | BoxConstraint: 322.3       |
|      |         |        |               |                   |                 |              |          | KernelScale: 183.2         |
|      |         |        |               |                   |                 |              |          | Epsilon: 18.839            |
...
|   25 |       8 | Best   |       0.18635 |             16.44 |         0.18635 |          910 | ensemble | Method: LSBoost            |
|      |         |        |               |                   |                 |              |          | NumLearningCycles: 221     |
|      |         |        |               |                   |                 |              |          | MinLeafSize: 2             |
...
|   41 |       8 | Best   |       0.17949 |            55.465 |         0.17949 |         3639 | ensemble | Method: LSBoost            |
|      |         |        |               |                   |                 |              |          | NumLearningCycles: 221     |
|      |         |        |               |                   |                 |              |          | MinLeafSize: 2             |
...
```
| | | | NumLearningCycles: 287 | | | | | | | | | | MinLeafSize: 15 | | 70 | 8 | Accept | 0.20016 | 20.232 | 0.17949 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 259 | | | | | | | | | | MinLeafSize: 6 | | 71 | 8 | Accept | 0.25946 | 0.16791 | 0.17949 | 228 | tree | MinLeafSize: 6893 | | 72 | 8 | Accept | 0.35187 | 0.63625 | 0.17949 | 228 | svm | BoxConstraint: 0.19105 | | | | | | | | | | KernelScale: 84.991 | | | | | | | | | | Epsilon: 0.073344 | | 73 | 8 | Accept | 0.20327 | 15.236 | 0.17949 | 228 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 211 | | | | | | | | | | MinLeafSize: 2 | | 74 | 8 | Error | NaN | 72.02 | 0.17949 | 228 | svm | BoxConstraint: 0.23518 | | | | | | | | | | KernelScale: 0.53603 | | | | | | | | | | Epsilon: 0.011066 | | 75 | 8 | Accept | 0.26049 | 0.33939 | 0.17949 | 228 | svm | BoxConstraint: 0.0013512 | | | | | | | | | | KernelScale: 0.0015726 | | | | | | | | | | Epsilon: 24.722 | | 76 | 8 | Error | NaN | 0.85688 | 0.17949 | 228 | svm | BoxConstraint: 843.32 | | | | | | | | | | KernelScale: 98.622 | | | | | | | | | | Epsilon: 0.0013207 | | 77 | 8 | Accept | 0.25939 | 0.24487 | 0.17949 | 228 | svm | BoxConstraint: 288.52 | | | | | | | | | | KernelScale: 0.0011806 | | | | | | | | | | Epsilon: 0.12918 | | 78 | 8 | Accept | 0.19746 | 21.241 | 0.17949 | 910 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 249 | | | | | | | | | | MinLeafSize: 2 | | 79 | 8 | Accept | 0.25967 | 0.36212 | 0.17949 | 228 | svm | BoxConstraint: 0.86126 | | | | | | | | | | KernelScale: 0.80732 | | | | | | | | | | Epsilon: 3.6131 | | 80 | 8 | Error | NaN | 30.648 | 0.17949 | 228 | svm | BoxConstraint: 0.014789 | | | | | | | | | | KernelScale: 10.262 | | | | | | | | | | Epsilon: 0.00053097 | |=======================================================================================================================================================| | Iter | Active | Eval | log(1+valLoss)| Time for training | Observed 
min | Training set | Learner | Hyperparameter: Value | | | workers | result | | & validation (sec)| validation loss | size | | | |=======================================================================================================================================================| | 81 | 8 | Best | 0.17835 | 69.425 | 0.17835 | 3639 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 289 | | | | | | | | | | MinLeafSize: 2 | | 82 | 8 | Error | NaN | 0.70287 | 0.17835 | 228 | svm | BoxConstraint: 0.044119 | | | | | | | | | | KernelScale: 725.24 | | | | | | | | | | Epsilon: 0.067068 | | 83 | 8 | Accept | 0.25922 | 0.20654 | 0.17835 | 228 | tree | MinLeafSize: 5151 | | 84 | 8 | Accept | 0.18422 | 21.378 | 0.17835 | 910 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 259 | | | | | | | | | | MinLeafSize: 6 | | 85 | 8 | Accept | 0.25956 | 15.603 | 0.17835 | 228 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 220 | | | | | | | | | | MinLeafSize: 398 | | 86 | 8 | Accept | 0.25925 | 16.649 | 0.17835 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 287 | | | | | | | | | | MinLeafSize: 3704 | | 87 | 8 | Accept | 0.19717 | 20.535 | 0.17835 | 910 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 211 | | | | | | | | | | MinLeafSize: 2 | | 88 | 8 | Accept | 0.25922 | 14.481 | 0.17835 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 215 | | | | | | | | | | MinLeafSize: 4480 | | 89 | 8 | Accept | 0.25923 | 0.31075 | 0.17835 | 228 | svm | BoxConstraint: 93.534 | | | | | | | | | | KernelScale: 0.0012628 | | | | | | | | | | Epsilon: 0.00070881 | | 90 | 8 | Error | NaN | 105.27 | 0.17835 | 228 | svm | BoxConstraint: 0.002754 | | | | | | | | | | KernelScale: 0.030396 | | | | | | | | | | Epsilon: 0.0049664 | | 91 | 8 | Accept | 0.38786 | 1.3545 | 0.17835 | 228 | svm | BoxConstraint: 59.578 | | | | | | | | | | KernelScale: 7.0125 | | | | | | | | | | Epsilon: 0.048114 | | 92 | 
8 | Error | NaN | 20.814 | 0.17835 | 228 | svm | BoxConstraint: 16.856 | | | | | | | | | | KernelScale: 0.0069656 | | | | | | | | | | Epsilon: 0.00079872 | | 93 | 7 | Accept | 0.25921 | 16.582 | 0.17835 | 228 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 275 | | | | | | | | | | MinLeafSize: 779 | | 94 | 7 | Accept | 0.2592 | 0.15883 | 0.17835 | 228 | tree | MinLeafSize: 5053 | | 95 | 7 | Accept | 0.29146 | 1.0903 | 0.17835 | 228 | svm | BoxConstraint: 0.0029396 | | | | | | | | | | KernelScale: 35.64 | | | | | | | | | | Epsilon: 0.0034305 | | 96 | 8 | Accept | 0.41923 | 0.56162 | 0.17835 | 228 | svm | BoxConstraint: 0.034261 | | | | | | | | | | KernelScale: 9.1273 | | | | | | | | | | Epsilon: 0.04355 | | 97 | 8 | Accept | 0.20525 | 0.70228 | 0.17835 | 910 | tree | MinLeafSize: 2 | | 98 | 8 | Accept | 0.20139 | 0.2252 | 0.17835 | 228 | tree | MinLeafSize: 12 | | 99 | 8 | Accept | 0.25923 | 0.21183 | 0.17835 | 228 | svm | BoxConstraint: 0.076547 | | | | | | | | | | KernelScale: 1.3896 | | | | | | | | | | Epsilon: 5.7928 | | 100 | 8 | Error | NaN | 1.1784 | 0.17835 | 228 | svm | BoxConstraint: 103.69 | | | | | | | | | | KernelScale: 380.67 | | | | | | | | | | Epsilon: 0.023201 | |=======================================================================================================================================================| | Iter | Active | Eval | log(1+valLoss)| Time for training | Observed min | Training set | Learner | Hyperparameter: Value | | | workers | result | | & validation (sec)| validation loss | size | | | |=======================================================================================================================================================| | 101 | 8 | Accept | 0.44687 | 0.86774 | 0.17835 | 228 | svm | BoxConstraint: 0.011037 | | | | | | | | | | KernelScale: 464.93 | | | | | | | | | | Epsilon: 0.01088 | | 102 | 8 | Accept | 0.19127 | 0.46502 | 0.17835 | 910 | tree | MinLeafSize: 12 | | 103 | 8 | Accept | 3.9177 | 
105.84 | 0.17835 | 228 | svm | BoxConstraint: 0.18091 | | | | | | | | | | KernelScale: 0.0093375 | | | | | | | | | | Epsilon: 0.0046786 | | 104 | 8 | Error | NaN | 0.33372 | 0.17835 | 228 | svm | BoxConstraint: 0.3297 | | | | | | | | | | KernelScale: 60.67 | | | | | | | | | | Epsilon: 1.522 | | 105 | 8 | Accept | 0.21268 | 0.30804 | 0.17835 | 228 | tree | MinLeafSize: 46 | | 106 | 8 | Accept | 0.19508 | 0.63479 | 0.17835 | 228 | svm | BoxConstraint: 141.35 | | | | | | | | | | KernelScale: 51.798 | | | | | | | | | | Epsilon: 0.0064846 | | 107 | 8 | Accept | 0.25922 | 0.28154 | 0.17835 | 228 | svm | BoxConstraint: 111.07 | | | | | | | | | | KernelScale: 0.010862 | | | | | | | | | | Epsilon: 2.691 | | 108 | 8 | Accept | 0.2592 | 13.479 | 0.17835 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 289 | | | | | | | | | | MinLeafSize: 163 | | 109 | 7 | Accept | 0.19161 | 1.6643 | 0.17835 | 910 | svm | BoxConstraint: 141.35 | | | | | | | | | | KernelScale: 51.798 | | | | | | | | | | Epsilon: 0.0064846 | | 110 | 7 | Accept | 0.25926 | 0.23349 | 0.17835 | 228 | svm | BoxConstraint: 0.0014645 | | | | | | | | | | KernelScale: 0.37849 | | | | | | | | | | Epsilon: 2.0091 | | 111 | 8 | Accept | 0.25923 | 0.21702 | 0.17835 | 228 | svm | BoxConstraint: 46.088 | | | | | | | | | | KernelScale: 0.0015015 | | | | | | | | | | Epsilon: 0.30073 | | 112 | 8 | Accept | 0.19687 | 25.947 | 0.17835 | 910 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 287 | | | | | | | | | | MinLeafSize: 15 | | 113 | 8 | Accept | 0.17871 | 51.27 | 0.17835 | 3639 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 259 | | | | | | | | | | MinLeafSize: 6 | | 114 | 8 | Accept | 0.20081 | 17.879 | 0.17835 | 228 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 278 | | | | | | | | | | MinLeafSize: 2 | | 115 | 8 | Accept | 0.20322 | 17.346 | 0.17835 | 228 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 255 | | | | | | | | | | 
MinLeafSize: 4 | | 116 | 8 | Error | NaN | 0.95447 | 0.17835 | 228 | svm | BoxConstraint: 2.9117 | | | | | | | | | | KernelScale: 16.756 | | | | | | | | | | Epsilon: 0.0023456 | | 117 | 8 | Accept | 0.19387 | 13.294 | 0.17835 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 215 | | | | | | | | | | MinLeafSize: 1 | | 118 | 8 | Accept | 0.19425 | 15.035 | 0.17835 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 212 | | | | | | | | | | MinLeafSize: 8 | | 119 | 8 | Accept | 0.25924 | 0.37346 | 0.17835 | 228 | svm | BoxConstraint: 0.0209 | | | | | | | | | | KernelScale: 9.3689 | | | | | | | | | | Epsilon: 26.54 | | 120 | 8 | Accept | 0.26066 | 0.27477 | 0.17835 | 228 | tree | MinLeafSize: 272 | |=======================================================================================================================================================| | Iter | Active | Eval | log(1+valLoss)| Time for training | Observed min | Training set | Learner | Hyperparameter: Value | | | workers | result | | & validation (sec)| validation loss | size | | | |=======================================================================================================================================================| | 121 | 8 | Accept | 0.19484 | 18.004 | 0.17835 | 228 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 261 | | | | | | | | | | MinLeafSize: 14 | | 122 | 8 | Accept | 0.2592 | 0.23541 | 0.17835 | 228 | tree | MinLeafSize: 133 | | 123 | 8 | Accept | 0.25921 | 0.29134 | 0.17835 | 228 | svm | BoxConstraint: 1.5995 | | | | | | | | | | KernelScale: 2.8676 | | | | | | | | | | Epsilon: 15.471 | | 124 | 8 | Accept | 0.23223 | 0.43343 | 0.17835 | 228 | tree | MinLeafSize: 1 | | 125 | 8 | Accept | 0.25972 | 0.203 | 0.17835 | 228 | svm | BoxConstraint: 0.0086335 | | | | | | | | | | KernelScale: 400.4 | | | | | | | | | | Epsilon: 2.0501 | | 126 | 8 | Error | NaN | 0.2949 | 0.17835 | 228 | svm | BoxConstraint: 7.4426 | | | | | | 
| | | | KernelScale: 0.002509 | | | | | | | | | | Epsilon: 0.0026332 | | 127 | 8 | Accept | 0.26011 | 0.29631 | 0.17835 | 228 | svm | BoxConstraint: 0.11427 | | | | | | | | | | KernelScale: 567.97 | | | | | | | | | | Epsilon: 17.13 | | 128 | 8 | Accept | 0.25923 | 0.32762 | 0.17835 | 228 | svm | BoxConstraint: 83.085 | | | | | | | | | | KernelScale: 0.0012722 | | | | | | | | | | Epsilon: 0.0023782 | | 129 | 8 | Accept | 0.19582 | 20.926 | 0.17835 | 910 | ensemble | Method: Bag | | | | | | | | | | NumLearningCycles: 278 | | | | | | | | | | MinLeafSize: 2 | | 130 | 8 | Accept | 0.21135 | 0.35596 | 0.17835 | 228 | tree | MinLeafSize: 3 | | 131 | 8 | Error | NaN | 87.153 | 0.17835 | 228 | svm | BoxConstraint: 358.5 | | | | | | | | | | KernelScale: 0.081127 | | | | | | | | | | Epsilon: 0.002852 | | 132 | 8 | Accept | 0.25922 | 0.18321 | 0.17835 | 228 | tree | MinLeafSize: 4593 | | 133 | 7 | Error | NaN | 0.80608 | 0.17835 | 228 | svm | BoxConstraint: 0.0082359 | | | | | | | | | | KernelScale: 64.836 | | | | | | | | | | Epsilon: 0.25191 | | 134 | 7 | Accept | 0.2592 | 0.1831 | 0.17835 | 228 | svm | BoxConstraint: 0.029216 | | | | | | | | | | KernelScale: 8.6693 | | | | | | | | | | Epsilon: 14.283 | | 135 | 8 | Accept | 0.21864 | 0.42231 | 0.17835 | 228 | tree | MinLeafSize: 66 | | 136 | 8 | Accept | 4.0359 | 106.74 | 0.17835 | 228 | svm | BoxConstraint: 97.5 | | | | | | | | | | KernelScale: 0.013998 | | | | | | | | | | Epsilon: 0.04939 | | 137 | 8 | Accept | 0.1864 | 18.298 | 0.17835 | 910 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 215 | | | | | | | | | | MinLeafSize: 1 | | 138 | 8 | Error | NaN | 0.93006 | 0.17835 | 228 | svm | BoxConstraint: 0.0092347 | | | | | | | | | | KernelScale: 496.16 | | | | | | | | | | Epsilon: 0.11821 | | 139 | 8 | Accept | 0.18463 | 21.544 | 0.17835 | 910 | ensemble | Method: LSBoost | | | | | | | | | | NumLearningCycles: 212 | | | | | | | | | | MinLeafSize: 8 | | 140 | 8 | Error | NaN | 4.9749 | 0.17835 | 228 | svm | 
BoxConstraint: 0.24317 | | | | | ... ```

```
__________________________________________________________
Optimization completed.
Total iterations: 340
Total elapsed time: 725.1069 seconds
Total time for training and validation: 5193.6688 seconds

Best observed learner is an ensemble model with:
        Learner:                ensemble
        Method:                 LSBoost
        NumLearningCycles:      289
        MinLeafSize:            2
Observed log(1 + valLoss): 0.17753
Time for training and validation: 295.4965 seconds

Documentation for fitrauto display
```

The `Total elapsed time` value shows that the ASHA optimization took about 12 minutes to run, less time than the Bayesian optimization.
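The `Training set size` column in the verbose display reflects how ASHA saves time: every candidate model is first trained on a small subset of observations, and only the promising candidates are promoted to larger subsets (sizes 228, 910, and 3639 appear in this run). The following sketch illustrates that promotion schedule in general terms; the bottom rung size, the growth factor of 4, and the full training-set size are illustrative assumptions, not `fitrauto` internals.

```python
def asha_rungs(smallest, n_train, factor=4):
    """Training-set sizes for successive ASHA rungs: each rung multiplies
    the previous size by `factor`, and the top rung uses the full set."""
    sizes = []
    size = smallest
    while size < n_train:
        sizes.append(size)
        size *= factor
    sizes.append(n_train)  # top rung trains on all available observations
    return sizes

# With a hypothetical bottom rung of 228 observations and a hypothetical
# full training set of 14000, the schedule resembles the sizes in the display.
print(asha_rungs(228, 14000))  # [228, 912, 3648, 14000]
```

Because most candidates are eliminated while still cheap to train, ASHA evaluates many more configurations per unit time than an optimizer that always trains on the full set.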

The final model returned by `fitrauto` corresponds to the best observed learner. Before returning the model, the function retrains it using the entire training data set (`trainData`), the listed `Learner` (or model) type, and the displayed hyperparameter values.

### Evaluate Test Set Performance

Evaluate the performance of the returned `bayesianMdl` and `ashaMdl` models on the test set `testData`. For each model, compute the test set mean squared error (MSE), and take a log transform of the MSE to match the values in the verbose display of `fitrauto`. Smaller MSE (and log-transformed MSE) values indicate better performance.

```
bayesianTestMSE = loss(bayesianMdl,testData,"saleprice");
bayesianTestError = log(1 + bayesianTestMSE)
```
```bayesianTestError = 0.1782 ```
```
ashaTestMSE = loss(ashaMdl,testData,"saleprice");
ashaTestError = log(1 + ashaTestMSE)
```
```ashaTestError = 0.1795 ```
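As a cross-check of the metric itself, the same `log(1 + MSE)` transform is easy to reproduce outside MATLAB. This NumPy sketch uses made-up response values (not the housing data) purely to show the computation that makes the test errors comparable to the `log(1+valLoss)` column of the verbose display.

```python
import numpy as np

# Hypothetical true and predicted responses on the log-price scale
y_true = np.array([12.1, 13.4, 11.8, 12.9])
y_pred = np.array([12.0, 13.1, 12.2, 12.7])

mse = np.mean((y_true - y_pred) ** 2)  # mean squared error, as loss() returns
test_error = np.log1p(mse)             # log(1 + MSE), the display's scale
```

The `1 +` inside the logarithm keeps the transformed error finite and nonnegative even when the MSE is very close to zero.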

For each model, compare the predicted test set response values to the true response values. Plot the predicted sale price along the vertical axis and the true sale price along the horizontal axis. Points on the reference line indicate correct predictions. A good model produces predictions that are scattered near the line. Use a 1-by-2 tiled layout to compare the results for the two models.

```
bayesianTestPredictions = predict(bayesianMdl,testData);
ashaTestPredictions = predict(ashaMdl,testData);

tiledlayout(1,2)
nexttile
plot(testData.saleprice,bayesianTestPredictions,".")
hold on
plot(testData.saleprice,testData.saleprice) % Reference line
hold off
xlabel(["True Sale Price","(log transformed)"])
ylabel(["Predicted Sale Price","(log transformed)"])
title("Bayesian Optimization Model")

nexttile
plot(testData.saleprice,ashaTestPredictions,".")
hold on
plot(testData.saleprice,testData.saleprice) % Reference line
hold off
xlabel(["True Sale Price","(log transformed)"])
ylabel(["Predicted Sale Price","(log transformed)"])
title("ASHA Optimization Model")
```

Based on the log-transformed MSE values and the prediction plots, the `bayesianMdl` and `ashaMdl` models perform similarly well on the test set.

For each model, use box plots to compare the distribution of predicted and true sale prices by borough. Create the box plots by using the `boxchart` function. Each box plot displays the median, the lower and upper quartiles, any outliers (computed using the interquartile range), and the minimum and maximum values that are not outliers. In particular, the line inside each box is the sample median, and the circular markers indicate outliers.

For each borough, compare the red box plot (showing the distribution of predicted prices) to the blue box plot (showing the distribution of true prices). Similar distributions for the predicted and true sale prices indicate good predictions. Use a 1-by-2 tiled layout to compare the results for the two models.
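The box-plot elements described above (median, quartiles, and the 1.5 × IQR outlier rule) can be made concrete with a small sketch. MATLAB's `boxchart` may use a slightly different quartile method internally, so treat this NumPy version, run on made-up log-scale prices, as the standard textbook rule rather than an exact reimplementation.

```python
import numpy as np

def box_stats(x):
    """Median, quartiles, and IQR-based outliers, as drawn in a box plot."""
    x = np.asarray(x, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # whisker fences
    outliers = x[(x < lo) | (x > hi)]        # points beyond the fences
    return med, (q1, q3), outliers

# Made-up log-scale sale prices with one extreme value
prices = [10.0, 10.2, 10.4, 10.5, 10.6, 10.8, 14.0]
med, (q1, q3), out = box_stats(prices)  # med = 10.5; 14.0 flagged as outlier
```

Any point beyond the fences is drawn as an individual marker, while the whiskers extend only to the most extreme points inside the fences.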

```
tiledlayout(1,2)
nexttile
boxchart(testData.borough,testData.saleprice)
hold on
boxchart(testData.borough,bayesianTestPredictions)
hold off
legend(["True Sale Prices","Predicted Sale Prices"])
xlabel("Borough")
ylabel(["Sale Price","(log transformed)"])
title("Bayesian Optimization Model")

nexttile
boxchart(testData.borough,testData.saleprice)
hold on
boxchart(testData.borough,ashaTestPredictions)
hold off
legend(["True Sale Prices","Predicted Sale Prices"])
xlabel("Borough")
ylabel(["Sale Price","(log transformed)"])
title("ASHA Optimization Model")
```

For both models, the predicted median sale price closely matches the median true sale price in each borough. The predicted sale prices seem to vary less than the true sale prices.

For each model, display box charts that compare the distribution of predicted and true sale prices by the number of families in a dwelling. Use a 1-by-2 tiled layout to compare the results for the two models.

```
tiledlayout(1,2)
nexttile
boxchart(testData.buildingclasscategory,testData.saleprice)
hold on
boxchart(testData.buildingclasscategory,bayesianTestPredictions)
hold off
legend(["True Sale Prices","Predicted Sale Prices"])
xlabel("Number of Families in Dwelling")
ylabel(["Sale Price","(log transformed)"])
title("Bayesian Optimization Model")

nexttile
boxchart(testData.buildingclasscategory,testData.saleprice)
hold on
boxchart(testData.buildingclasscategory,ashaTestPredictions)
hold off
legend(["True Sale Prices","Predicted Sale Prices"])
xlabel("Number of Families in Dwelling")
ylabel(["Sale Price","(log transformed)"])
title("ASHA Optimization Model")
```

For both models, the predicted median sale price closely matches the median true sale price in each type of dwelling. The predicted sale prices seem to vary less than the true sale prices.

For each model, plot a histogram of the test set residuals, and check that they are normally distributed. (Recall that the sale prices are log transformed.) Use a 1-by-2 tiled layout to compare the results for the two models.

```
bayesianTestResiduals = testData.saleprice - bayesianTestPredictions;
ashaTestResiduals = testData.saleprice - ashaTestPredictions;

tiledlayout(1,2)
nexttile
histogram(bayesianTestResiduals)
title("Test Set Residuals (Bayesian)")
nexttile
histogram(ashaTestResiduals)
title("Test Set Residuals (ASHA)")
```

Although the histograms are slightly left-skewed, they are both approximately symmetric about 0.
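The slight left skew visible in the histograms can also be quantified numerically. This sketch computes the sample skewness of a residual vector using synthetic values (not the actual residuals): a value near 0 indicates symmetry, and a negative value indicates the longer left tail noted above.

```python
import numpy as np

def skewness(residuals):
    """Sample skewness: mean of the standardized residuals cubed."""
    r = np.asarray(residuals, dtype=float)
    z = (r - r.mean()) / r.std()
    return float(np.mean(z ** 3))

symmetric = skewness([-2.0, -1.0, 0.0, 1.0, 2.0])       # ~0: symmetric
left_tail = skewness([-3.0, -0.5, 0.0, 0.5, 1.0, 1.0])  # negative: left-skewed
```

A small skewness magnitude, together with a roughly bell-shaped histogram centered at 0, supports the assumption that the models' errors are approximately normal on the log-price scale.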