Which optimization method is better for problems with random steps
Tomás Romero Pietrafesa
on 21 Dec 2022
Answered: Alan Weiss on 22 Dec 2022
Hi! I'm working on the parameter optimization of a function that has random steps inside it. The function is something like this:
Error=fun(x(1),x(2),x(3))
The thing is that almost all steps of my function use random numbers (and have random steps), so I can obtain different values of "Error" for the same values of x(1), x(2) & x(3). These different values stay within a bounded deviation, i.e. for a specific input the Error value will be between 0.6 and 0.8.
So when I go to the MATLAB Optimize Live Editor I'm not sure which solver I should use, since my function doesn't seem to fit any of the categories, given its random nature. Maybe there is another optimization method/solver designed for this kind of problem? If not, which one would you recommend?
Thank you in advance! I hope I made myself clear.
Accepted Answer
John D'Errico
on 21 Dec 2022
Edited: John D'Errico on 21 Dec 2022
No optimizer is good in this case.
That is, all of the classical optimizers ABSOLUTELY assume that your objective is a well defined function, in the sense that if you pass it the same set of values twice in a row, you would get the same result back. This is terribly important in how they are written.
Depending on the optimizer, some require different degrees of differentiability or smoothness. Optimizers like fmincon, fminunc, etc., assume that your function is everywhere differentiable; without that, they can fail. Other optimizers, fminsearch for example, can tolerate failures of differentiability, but they can still easily get lost on a problem that is not differentiable. Others are more extreme yet: GA, for example, can even handle discontinuous problems. But even there, GA does not worry about noise in the objective function itself, presuming that it does not exist. That is, GA presumes your objective function is deterministic, that there is no randomness in what it returns.
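For a concrete (purely illustrative) picture of the problem, here is a toy noisy objective of my own, not yours: calling it twice at the same point returns two different values, which is exactly what these solvers do not expect.
noisyFun = @(x) sum(x.^2) + 0.1*randn();  % deterministic part plus random noise
x0 = [1 2 3];
noisyFun(x0)   % one value
noisyFun(x0)   % a different value at the very same point
% fminsearch will still run, but its simplex comparisons (and the finite
% differences used by fmincon/fminunc) are corrupted by the noise, so the
% reported "solution" and stopping behavior are unreliable.
[xBest, fBest] = fminsearch(noisyFun, x0);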
In fact, pretty much everything assumes that if you look back at a point already checked, the function value does not change. This now gets into the realms of stochastic optimization and response surface optimization.
I would note that the phrase "stochastic optimization" actually includes two classes of problem. In one, the search scheme is randomly generated, but the objective function is deterministic (tools like genetic algorithms, simulated annealing, etc., are typically in this class; they are random in how they move, but they presume a deterministic objective). The other side of stochastic optimization involves the optimization of a non-deterministic objective.
Your problem is of the latter form. So take care in what you read. You might want to concentrate on tools from response surface methodology, as they are more targeted at your problem.
This field uses statistical modeling of the objective, then uses that estimated model to solve for the optimum value. You can then repeat this process, with a new experiment centered around the point chosen from the previous iteration, thus making an iterative scheme which will (hopefully) converge to the solution you wish to see. Are there any tools in MATLAB which do this process directly and do all the work for you? Not really, that I know about. But you can use the stats toolbox to formulate such an iteration. That is, start with a set of points, estimate a low-order polynomial model (typically a quadratic one), solve for the desired minimum or maximum as you need, then repeat the process, centered around that point. A problem with such an algorithm is that it could fail IF the locally quadratic model I just mentioned were in fact hyperbolic in nature; then there would be no minimum or maximum. In that case, you would need to start thinking about things like trust regions. The process could get complicated. (But it might actually be a fun code to write...)
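Here is a minimal sketch of that iteration, assuming a made-up noisy objective (noisyFun below is purely illustrative) and using fitlm from the stats toolbox for the local quadratic model. The design, region size, and shrink factor are arbitrary choices, and there is no guard against the hyperbolic (saddle) case I mentioned.
noisyFun = @(x) (x(1)-1)^2 + (x(2)+2)^2 + 0.5*x(3)^2 + 0.05*randn();  % hypothetical objective
center    = [0 0 0];   % current best guess
halfWidth = 1;         % half-width of the local experimental region
nPoints   = 30;        % evaluations per iteration
for iter = 1:10
    % 1) Sample an experiment around the current center (a random design here;
    %    a factorial or Latin hypercube design would be more typical).
    X = center + halfWidth*(2*rand(nPoints,3) - 1);
    y = zeros(nPoints,1);
    for k = 1:nPoints
        y(k) = noisyFun(X(k,:));   % noisy evaluations
    end
    % 2) Fit a local quadratic response surface to the noisy data.
    mdl = fitlm(X, y, 'quadratic');
    % 3) Minimize the fitted (smooth) model, not the noisy objective itself.
    center = fminsearch(@(x) predict(mdl, x), center);
    % 4) Shrink the experimental region as the iterates settle down.
    halfWidth = 0.8*halfWidth;
end
center   % estimate of the minimizer of the underlying noise-free objective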
More Answers (2)
Torsten
on 21 Dec 2022
This seems to be a stochastic optimization problem. None of the optimizers from the optimization toolbox can cope with random outputs from the objective function. Since I don't know what your problem is about, I cannot give further advice.
Alan Weiss
on 22 Dec 2022
In addition to what the other answerers have described, there does exist an optimization solver that can deal with stochastic objective functions: bayesopt in Statistics and Machine Learning Toolbox. This is the only solver that I am aware of that assumes that the objective function gives a stochastic (nondeterministic) response.
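A minimal sketch of how such a call might look for the problem as described (the variable names, bounds, and evaluation budget are my assumptions, and fun stands for your stochastic Error function):
vars = [optimizableVariable('x1', [0 1]);
        optimizableVariable('x2', [0 1]);
        optimizableVariable('x3', [0 1])];
% bayesopt passes a one-row table of variable values to the objective function.
objFcn = @(t) fun(t.x1, t.x2, t.x3);
results = bayesopt(objFcn, vars, ...
    'IsObjectiveDeterministic', false, ...   % tell the solver the objective is noisy
    'MaxObjectiveEvaluations', 60);
bestPoint = results.XAtMinEstimatedObjective   % best point according to the fitted surrogate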
Good luck,
Alan Weiss
MATLAB mathematical toolbox documentation