Repeated fmincon optimization of slightly different objective functions
Hi!
I have a (log-likelihood) function f(x,data), which I optimize repeatedly with respect to x using fmincon. After each optimization one data point is added, and I use the previous solution as the starting value for the next optimization. Since there are many data points, the optimization problem and its solution change only marginally from one step to the next. However, the optimizer diverges each time, only to converge back to a solution close to the starting point, which is very time-consuming. I cannot provide an analytical gradient or Hessian.
How could I speed up the procedure? I thought about passing in an initial gradient and Hessian, but all I could find is an option to supply an analytical gradient and Hessian.
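To make the setup concrete, the warm-started loop I am describing looks roughly like this (loglik, data, x0, and the constraint arguments are placeholders for my actual problem):

```matlab
% Sketch of the repeated warm-started fit: after each new data point,
% restart fmincon from the previous solution.
opts = optimoptions('fmincon', 'Display', 'off');
x = x0;                                    % starting value for the first fit
for n = nStart:numel(data)
    nll = @(p) -loglik(p, data(1:n));      % negative log-likelihood on n points
    x = fmincon(nll, x, A, b, Aeq, beq, lb, ub, [], opts);
    % x is reused as the warm start for the next optimization
end
```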
Thank you
Rosi
0 Comments
Answers (1)
Alan Weiss
on 6 May 2016
I am not sure why you are using the procedure that you describe. Do you need the intermediate solutions after each set of data points, or are you simply trying to guide fmincon to the global solution?
If you are simply guiding fmincon, then I suggest that you try giving fmincon all the data points at the start, and instead start fmincon from a variety of initial points to search for a global solution.
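A minimal sketch of that multistart approach, assuming the Global Optimization Toolbox is available (loglik, data, x0, lb, and ub are placeholders for your problem):

```matlab
% Fit all data at once and search from many start points with MultiStart.
nll = @(p) -loglik(p, data);               % full data set in one objective
problem = createOptimProblem('fmincon', ...
    'objective', nll, 'x0', x0, 'lb', lb, 'ub', ub);
ms = MultiStart('Display', 'off');
[xBest, fBest] = run(ms, problem, 20);     % try 20 random start points
```

Without the toolbox, a plain loop over random starting points passed to fmincon accomplishes the same idea.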
If you need the intermediate solutions, then I do not have any ideas. Sorry.
Alan Weiss
MATLAB mathematical toolbox documentation