
Model Evaluation Window

About the Model Evaluation Window

The Model Evaluation Window is intended to help you evaluate the fit of your model. You can evaluate against the fit data, validation data, other data, or without reference to any data. The window displays some of the same views described in Plots and Statistics for Comparing Models. The available views depend on the kind of model you are evaluating and the evaluation mode you choose.

You can access the Model Evaluation window via the menu items under Model > Evaluate from any of the modeling nodes: the one-stage or two-stage model node, local model node, response feature nodes, or any child model nodes. You can use validation data with any model except response features.

The Model Selection window allows you to compare different models with each other and with the data used to create them. The Model Evaluation window, by contrast, allows you either to examine a model without data or to validate it against data other than that used to create it. For any model node, model evaluation is a quick way to examine the model in more ways than the main Browser views allow. For example, local models with more than one input factor can only be viewed in the Predicted/Observed view in the main Browser, and the Model Selection window only shows you the two-stage model, so you can use Model Evaluation to view the local model itself in more detail. For other models, such as childless response feature nodes or their child nodes, the Model Selection window is not available, so Model Evaluation is the way to view these models in detail.

There are four evaluation modes, determined by your selection from the Model > Evaluate submenu:

  • Fit data (or the keyboard shortcut Ctrl+E) — The evaluation window appears, showing the model surface along with the data that was used to create this model. Summary Statistics are shown in the model list. The available views are:

    • Residuals

    • Response surface

    • Cross section

    • Operating points — Two-stage models only

    • Predicted/observed — One-stage or response feature only

  • Validation data — The evaluation window appears (titled Model Validation), where you can compare the model with the validation data. This option is available only if you have attached validation data to the test plan. See Using Validation Data. You can use validation data only with two-stage (response), local, and one-stage models (not response features). The available views are:

    • Residuals

    • Response surface

    • Cross section

    • Operating points — Two-stage models only. The local fit is not shown, as the local model was fitted to different data.

    Validation RMSE appears in the model list for comparison with the Fit RMSE.

  • No data — The evaluation window appears with only the model surface view and the cross-section view. These do not show the model fit against data. You can investigate the shape of the model surface.

  • Other data — Opens the Select Data for Evaluation wizard, where you can choose the data to display alongside the model views. The steps are the same as those for attaching validation data to the test plan: select a data set, match signal names if necessary (only if the signal names do not match), and select the operating points you want to use (by default, all operating points are included). See Using Validation Data. When you click Finish, the evaluation window appears with the same views shown for validation data.

For more information about each view, see Plots and Statistics for Comparing Models.

Using Validation Data

Validation statistics help you select a model that makes reasonable predictions both at the data points and in the regions between them. To validate your model, collect additional validation data, then measure how well the model predicts that data. Comparing the validation RMSE with the RMSE based on the modeling data is a good model selection statistic. Use the Model Evaluation window to validate models against other data. You can use validation data throughout a test plan.
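The comparison of fit RMSE and validation RMSE can be sketched in a few lines of code. The following Python example is purely illustrative (it is not part of the toolbox, and the model and data values are invented): it evaluates one candidate model against both the data used to fit it and a separate validation set.

```python
import math

def rmse(model, xs, ys):
    """Root mean squared error of model predictions against observed data."""
    return math.sqrt(sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Hypothetical fit data and validation data (values are illustrative).
fit_x = [0.0, 1.0, 2.0, 3.0]
fit_y = [1.1, 2.9, 5.2, 6.8]   # roughly y = 2x + 1, with noise
val_x = [0.5, 1.5, 2.5]
val_y = [2.1, 3.8, 6.1]

# A simple hand-written candidate model; in the Model Browser this would be
# the fitted response surface rather than a function you write yourself.
model = lambda x: 2.0 * x + 1.0

fit_rmse = rmse(model, fit_x, fit_y)
val_rmse = rmse(model, val_x, val_y)

# A validation RMSE close to the fit RMSE suggests the model generalizes
# between data points; a much larger validation RMSE suggests the model is
# overfitted to the data used to create it.
```

This is the comparison the model list supports when it shows Validation RMSE alongside Fit RMSE.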

Attach validation data to your test plan, then use it to validate your models. Validation RMSE appears in the statistics tables, you can view plots of validation residuals, and you can open the Model Evaluation window to investigate your models with validation data.

To attach a data set to your test plan for validation:

  1. At the test plan level, select TestPlan > Validation Data. The Select Data for Validation wizard appears.

    Dialog box titled ‘Select Data for Validation’ showing dataset options and details for a file named VVT.xls, including record count, variable count, test count, and date.

  2. Select a data set, and click Next.

  3. If the input factors and responses required to evaluate the model do not appear in the selected data set, the next screen allows you to match signal names in the selected data set to those in the model. If the signal names already match, you go straight to the screen for selecting operating points.

    Dialog box titled ‘Select Data for Validation’ showing lists of unselected and selected tests, with test information table displaying variable names and values for Test number 1.

    Choose which operating points from this data set to use. By default, all operating points are selected. For the currently selected operating point, the mean values of all variables in this data set are displayed on the right.

  4. Click Finish to use the selected operating points to validate models in this test plan.
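Conceptually, the wizard steps above amount to renaming the data set's signals to the model's names and then filtering to the selected operating points. The following Python sketch illustrates that idea only; the function names, record layout, and signal names (ENGSPEED, N, and so on) are all hypothetical, not toolbox APIs.

```python
def match_signals(records, name_map):
    """Rename each record's keys from data set signal names to model names."""
    return [{name_map.get(k, k): v for k, v in rec.items()} for rec in records]

def select_operating_points(records, keep_tests):
    """Keep only records whose test number is in the selected set."""
    return [rec for rec in records if rec["test"] in keep_tests]

# Hypothetical validation data set: one record per operating point (test).
data = [
    {"test": 1, "ENGSPEED": 1000, "LOAD": 0.4},
    {"test": 2, "ENGSPEED": 2000, "LOAD": 0.6},
    {"test": 3, "ENGSPEED": 3000, "LOAD": 0.8},
]

# The data set uses ENGSPEED/LOAD; suppose the model expects N/L.
matched = match_signals(data, {"ENGSPEED": "N", "LOAD": "L"})

# Keep only the operating points chosen in the wizard (tests 1 and 3 here).
selected = select_operating_points(matched, keep_tests={1, 3})
```

After this matching and selection, the model can be evaluated at each remaining operating point, which is what produces the Validation RMSE statistics described below.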

The validation data set appears in the Data Set information pane for the test plan. Validation RMSE is automatically added to the summary statistics for comparison in the bottom list view of response models in the test plan.

You can now use the validation data to validate all models except response features. You can see validation statistics in the following places:

  • Model List — Validation RMSE appears in the summary statistics in the lower list of models at the test plan, response, and one-stage nodes

  • At the local node view:

    • Pooled Statistics — Validation RMSE — The root mean squared error between the two-stage model and the validation data for all operating points

    • Diagnostic Statistics > Local Diagnostics — Local model Validation RMSE for the currently selected operating point (if validation data is available for the current operating point—global variables must match)

    • Diagnostic Statistics > Summary Table — Validation RMSE for the current operating point (if available) appears for local multiple models

  • Summary Table — Validation RMSE for one-stage models

You can view validation plots in the following places:

  • Plots of validation residuals — For local and one-stage models in the Model Browser

  • From any model node except response features, you can select Model > Evaluate > Validation Data to open the Model Evaluation window and investigate the model with the selected validation data.

  • Similarly, you can use the Model Evaluation window to investigate your models with other data by choosing Model > Evaluate > Other Data from a modeling node. The steps required are the same: select a data set, match signal names if necessary, and select the operating points to use.