nlarx model: compare and predict (horizon kept at 1) fits differ totally

Hello,
I am using the System Identification Toolbox. Recently, I have been facing the following issues:
1) When I identify my data with nlarx, the fit of the response for the model identified with Focus set to 'simulation' (model1) is 42%, whereas for the model identified with Focus set to 'prediction' (model2) it is 99% (i.e., a very large difference).
2) When I plot the responses of both models, model1 and model2, using the predict and compare (simulation) commands, the plot that best fits the data is always the one from predict.
I mean that even the model identified with Focus 'simulation' gives the best-fitting plot with the predict command. (As far as I know, shouldn't it give a good fit with the simulation (compare) command, since Focus was set to simulation in this particular case?)
My question is: is it normal for some data to behave like this, or am I missing something, and how can I deal with it?
Further, if it is normal, should we consider the model inappropriate to use, since the fits for prediction and simulation vary a lot?
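For reference, a minimal sketch of the workflow I am describing (the data names, model orders, and nonlinearity below are placeholders, not my actual values):

% ze = estimation data (iddata), zv = validation data (iddata) -- placeholders
optSim  = nlarxOptions('Focus','simulation');
optPred = nlarxOptions('Focus','prediction');    % 'prediction' is the default
model1 = nlarx(ze, [2 2 1], 'sigmoidnet', optSim);   % orders [na nb nk] assumed
model2 = nlarx(ze, [2 2 1], 'sigmoidnet', optPred);
compare(zv, model1, model2)      % simulation fit (default horizon = Inf)
compare(zv, model1, model2, 1)   % 1-step-ahead prediction fit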
Looking forward to your answer.
Regards
  1 Comment
Shafaq Gul on 10 Jul 2020
Thank you for such a quick response. The explanation really helped me a lot and broadened my view. Have a nice weekend!


Accepted Answer

Rajiv Singh on 10 Jul 2020
Edited: Rajiv Singh on 10 Jul 2020
The difference between (finite-horizon) prediction and simulation is a fundamental concept, something you could read books and articles on to build a solid understanding.
A way to think of this difference is as follows: given knowledge of the weather during the past 5 days, can you predict what the weather is going to be tomorrow? Probably yes, often with high certainty. How about 5 days from now? How about 2 months into the future? You can convince yourself that predictions over longer time horizons are going to be more difficult. This difficulty translates into the following facts:
  1. Typically, for any dynamic model, prediction uncertainty grows with the prediction horizon.
  2. Conversely, even a bad model can do a reasonable job with short-horizon predictions. For example, the trivial model y(t) = y(t-1) might work about 50% of the time for forecasting whether it will rain tomorrow or not. As the sampling frequency increases, the 1-step prediction gets even better; think of predicting the weather only a minute into the future (see the sketch after this list). So 1-step predictive ability is rarely a good test of model quality.
  3. For longer prediction horizons, you need a more sophisticated model, for example one that attempts to emulate the physics of the process. Simulation is infinite-horizon prediction, the most difficult of the lot.
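To make point 2 concrete, here is a minimal sketch (with made-up example data, not related to your dataset) showing that the naive persistence predictor y(t) = y(t-1) can score a very high 1-step fit while falling apart at a longer horizon:

t = (0:0.01:10)';
y = sin(t) + 0.05*randn(size(t));                     % slowly varying signal + noise
yhat1 = [y(1); y(1:end-1)];                           % 1-step "prediction": repeat the last sample
fit1  = 100*(1 - norm(y - yhat1)/norm(y - mean(y)));  % NRMSE fit, in percent
k = 100;                                              % 100 steps (1 time unit) ahead
yhatK = [y(1)*ones(k,1); y(1:end-k)];                 % k-step persistence "prediction"
fitK  = 100*(1 - norm(y - yhatK)/norm(y - mean(y)));
fprintf('1-step fit: %.1f%%, %d-step fit: %.1f%%\n', fit1, k, fitK);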
In the System Identification Toolbox, the default focus for estimation (training) is 'prediction', which minimizes the 1-step-ahead prediction errors to determine the model parameters. This is because theory says that this focus has the best chance of giving unbiased parameter estimates, since it captures, and compensates for, the disturbance profile the best. However, to figure out whether the estimated model is any good, you should simulate it (infinite-horizon prediction) using the input from a test dataset. Hence the default horizon in the validation command COMPARE is Inf.
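A rough sketch of these two validation views, assuming a generic estimated model 'model' and held-out validation data 'zv' (placeholder names):

opt = nlarxOptions;           % the default estimation focus is 'prediction'
disp(opt.Focus)
compare(zv, model)            % simulation: default prediction horizon = Inf
compare(zv, model, 1)         % 1-step-ahead prediction, usually a flattering fit
yp = predict(model, zv, 1);   % the same 1-step prediction, returned as data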
In the neural-network world, the difference between prediction and simulation can be seen as the difference between feed-forward (open-loop) and recurrent (closed-loop) networks. The former are easier to train, but you should not let small training errors lead you to believe that you have a good model. Instead, simulate a closed-loop version of the trained model (see, e.g., the CLOSELOOP command in Deep Learning Toolbox) to see how effective it is at capturing long-horizon predictions.
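As a generic sketch of that open-loop versus closed-loop check (assuming you have input and target series u and y as cell arrays of scalar values; the delays and layer size are arbitrary placeholders):

net = narxnet(1:2, 1:2, 10);              % open-loop (series-parallel) NARX network
[X, Xi, Ai, T] = preparets(net, u, {}, y);
net = train(net, X, T, Xi, Ai);           % easy to train: measured outputs are fed back
yOpen = net(X, Xi, Ai);                   % 1-step-ahead (open-loop) response
netC = closeloop(net);                    % feed back predicted outputs instead
[Xc, Xic, Aic, Tc] = preparets(netC, u, {}, y);
yClosed = netC(Xc, Xic, Aic);             % multi-step (simulation-like) response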
Disclaimer: all of this description is a simplification, subject to many assumptions about model structure, the goal of modeling, and the nature of the noise. But on the whole, I think it is important to be aware of this crucial difference.

More Answers (0)
