Optimising a multivariable multiobjective function with known best objective values
SM on 25 Jun 2021
Commented: Walter Roberson on 26 Jun 2021
Hello
I have a displacement error function with two coefficients as input (the coefficients are the parameters that need to be optimised), and the output of the function (the cost function) is a vector of six numbers giving the error of the function for those coefficient inputs.
[y1, y2, y3, y4, y5, y6] = cost(x1, x2)
The ideal coefficients are those that yield a cost function value of zero (that is, the best combination of x1 and x2 should give y1 = y2 = y3 = y4 = y5 = y6 = 0); however, for other coefficient values the outputs may be positive or negative.
I am currently running into the problem that the optimisation loop tries to minimise the cost function by driving it to ever more negative values, which is wrong. It should push the cost function towards zero.
I cannot use absolute values, since that is physically wrong and would not give correct results.
Is there any way to assign an objective value of zero to the function and make MATLAB push the cost function to zero?
I am currently using gamultiobj function.
Thanks
0 Comments
Accepted Answer
Walter Roberson on 26 Jun 2021
Square the function value. Provided it is real-valued, the minimum of the square is 0.
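A minimal sketch of how this could be wired up, assuming a user-supplied cost(x1, x2) that returns the 1-by-6 error vector described in the question (the bounds below are placeholders, not from the question):

% Square each of the six errors so that zero error is the minimum of every
% objective; gamultiobj then pushes all of them toward 0 instead of -Inf.
squaredCost = @(x) cost(x(1), x(2)).^2;   % elementwise square, keeps 6 objectives

nvars = 2;                  % the two coefficients being optimised
lb = [-10 -10];             % hypothetical bounds; replace with your own
ub = [ 10  10];

[xBest, fBest] = gamultiobj(squaredCost, nvars, [], [], [], [], lb, ub);

If a single compromise solution is wanted rather than a Pareto front, the same idea works with a scalar solver such as ga or fmincon by minimising sum(cost(x(1), x(2)).^2) instead.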
3 Comments
Walter Roberson on 26 Jun 2021
Edited: Walter Roberson on 26 Jun 2021
You assume that at each step, the minimizer examines only two points.
That isn't even the case for bisection-type zero finders: those use two current points to find a third point, and the values at the three points determine where to look next.
Simplex-style minimizers use (N+1) points for N variables, so with two variables they would use three points.
For gradient estimates by forward differences (considered less accurate), N additional points need to be evaluated for N variables (so N+1 points in total are used to figure out where to go next).
For central differences, 2*N extra evaluations are needed for the gradient estimate.
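A small illustration (not from the thread) of those evaluation counts, using a hypothetical two-variable function:

f  = @(x) (x(1)-1)^2 + (x(2)+2)^2;   % example function, N = 2 variables
x0 = [0 0];
h  = 1e-6;                           % step size for the differences
N  = numel(x0);
f0 = f(x0);                          % base evaluation
gFwd = zeros(1, N);                  % forward differences: N extra evaluations
gCtr = zeros(1, N);                  % central differences: 2*N extra evaluations
for k = 1:N
    e = zeros(1, N); e(k) = h;
    gFwd(k) = (f(x0 + e) - f0) / h;
    gCtr(k) = (f(x0 + e) - f(x0 - e)) / (2*h);
end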
Walter Roberson on 26 Jun 2021
Think more about your hypothesis: that minimization cannot happen because there would be two points with the same value, and the minimizer cannot decide what to do from that.
Think about the definition of a "minimum". A point x is a local minimum if there is a neighbourhood of x such that for every point x' in that neighbourhood, f(x') >= f(x). Suppose x is hypothesized to be a minimum. Sample at nearby points x1 and x2, and suppose f(x) < f(x1) and f(x) < f(x2). Is x in fact a minimum? Compare f(x1) to f(x2) and take the lesser of them, say f(x) < f(x1) < f(x2). By the Intermediate Value Theorem and continuity assumptions, and assuming dimension 2 or more, this implies that there is a third point x3 with f(x1) = f(x3), f(x) < f(x3), and x1 ~= x3. This is inherent in the definition of a minimum: there will always be such points of equal value that are not the same point (provided the dimensionality is > 1; if it is 1, the points might be unique).
Under your hypothesis, that would "prove" that it is not possible to find the minimum of a function of 2 or more variables, which just isn't the case.
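A quick numerical check of that point, with a hypothetical function of two variables: distinct points of equal value always exist near the minimum, yet a simplex minimizer converges anyway.

f  = @(x) x(1)^2 + x(2)^2;          % minimum at the origin, f = 0
xA = [1e-3 0];  xB = [0 1e-3];      % distinct points with the same value
fprintf('f(xA) = %g, f(xB) = %g\n', f(xA), f(xB));
[xmin, fmin] = fminsearch(f, [1 1]) % fminsearch still finds the minimum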
You would be on safer ground if you were to hypothesize that, in order to reliably find a minimum of a multinomial (that is, a polynomial extended to multiple variables), there is an "effective degree" that could be calculated from the combination of coefficient powers, and that you might need to sample at least one more point than that degree. But that is for global minimization, not local minimization, and it concerns the total number of points to be evaluated, not the number of points involved in one iteration.
More Answers (0)