Is it possible to minimize x^y (a non-linear objective) by choosing both x and y?

Hi there,
While working on a non-linear optimization problem in MATLAB, I encountered errors that I cannot debug by myself, and I even doubt whether MATLAB can solve this optimization at all. I have the following code:
%%%%%%%%%%%%%%%%%%%%%%%%CODE BELOW
x = optimvar('x');
y = optimvar('y');
prob = optimproblem( "Objective", x ^ (sqrt(y)));
prob.Constraints.con1 = x >=1.01;
prob.Constraints.con2 = y >=1.01;
prob.Constraints.con3 = y <=2;
prob.Constraints.con4 = x ^ y >=100;
prob.Constraints.con4 = x <= 100;
x0.x = 1.01;
x0.y = 1.01;
sol = solve(prob,x0)
%%%%%%%%%%%%%%%%%%%%%%%%%%CODE END
Meanwhile, the error message is as follows:
%%%%%%
"Error using optim.internal.problemdef.operator.PowerOperator
Exponent must be a finite real numeric scalar.
Error in optim.internal.problemdef.Power
Error in .^
Error in ^
Error in optimization_solver (line 4)
prob = optimproblem( "Objective", x ^ (sqrt(y)));"
%%%%%%
My sincere thanks to anyone who can fix my code so the problem gets solved, or who can provide any answers.

Accepted Answer

John D'Errico
John D'Errico on 26 Jul 2022
Edited: John D'Errico on 26 Jul 2022
Can MATLAB solve it? Yes. Your doubt stems only from not yet knowing how to solve the problem.
In fact, simplest is to transform the problem. Your goal is to solve the problem:
minimize x^sqrt(y)
subject to the constraints
1.01 <= x <= 100
1.01 <= y <= 2
x^y >= 100
I'll solve the problem by use of a simple transformation. Remember that the log is a monotonic transform. So if I minimize something, then I will still minimize it if I take the log and find the minimum of that.
My variation of your problem uses the transformation u = log(x). Assume that log here means the natural log, which is what MATLAB's log function computes. First, recognize these two identities:
log(x^sqrt(y)) = log(x) * sqrt(y)
and
log(x^y) = log(x)*y
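A quick numeric sanity check of these identities (a throwaway snippet of my own, not part of the solution — any feasible values of x and y will do):

```matlab
% Pick arbitrary feasible values and compare both sides of each identity:
x = 7; y = 1.8;
abs(log(x^sqrt(y)) - log(x)*sqrt(y))   % ~0, up to floating-point error
abs(log(x^y)       - log(x)*y)         % ~0, up to floating-point error
```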
Do you see how this works nicely? So my version of the problem is much simpler:
minimize u*sqrt(y)
subject to the constraints
log(1.01) <= u <= log(100)
1.01 <= y <= 2
u*y >= log(100)
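As a side note (my addition, assuming the constraint u*y >= log(100) is active at the optimum), the answer can be read off by hand: on that constraint, u = log(100)/y, so the objective u*sqrt(y) becomes log(100)/sqrt(y), which decreases in y, putting the minimum at the upper bound y = 2.

```matlab
% Hand check: on the active constraint u*y == log(100),
% the objective u*sqrt(y) equals log(100)/sqrt(y), decreasing in y,
% so the minimum sits at the upper bound y = 2:
y = 2;
u = log(100)/y          % = log(10), about 2.3026
u*sqrt(y)               % minimal objective in log space, about 3.256
```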
I have a funny feeling I can set that up in a symbolic form, solving it with Lagrange multipliers on paper. But we can use a numerical solver as you did:
u = optimvar('u');
y = optimvar('y');
prob = optimproblem( "Objective", u*sqrt(y));
prob.Constraints.con1 = u >= log(1.01);
prob.Constraints.con2 = y >= 1.01;
prob.Constraints.con3 = y <= 2;
prob.Constraints.con4 = u*y >= log(100);
I see at this point that you defined prob.Constraints.con4 TWICE. That surely caused a problem in your solve.
prob.Constraints.con5 = u <= log(100);
V0.u = 2;
V0.y = 1.5;
sol = solve(prob,V0)
Solving problem using fmincon.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields:
    u: 2.3026
    y: 2.0000
x = exp(sol.u)
x = 10.0000
y = sol.y
y = 2.0000
SURPRISE! It worked. As I suggested above, your problem was most likely in the definition of con4, which you assigned twice. Regardless, the use of logs made the problem a bit simpler to solve. The use of powers often creates numerical problems; things get nasty really fast. But most important was probably just to write more careful code.
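To close the loop, a small check of my own (using the recovered numeric values, not optimization variables) that the solution satisfies the original, untransformed problem:

```matlab
x = 10; y = 2;       % values recovered above
x^y                  % = 100, so the constraint x^y >= 100 is active
x^sqrt(y)            % original objective, about 25.9546
```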
  4 Comments
Torsten
Torsten on 26 Jul 2022
I thought you might have more experience with the problem-based approach to optimization problems. I just cannot understand why it would not be solved, given that it is solved by "fmincon"; obviously, the translation is different from the one I used.
Zheng Kang
Zheng Kang on 30 Jul 2022
Wow, that code is amazing, thank you so much! Yes, this problem is easy to solve with Lagrange multipliers, but I am actually working on a more complicated one and got stuck on the powers while debugging. Your code helped me a lot. My sincere appreciation.


More Answers (3)

Torsten
Torsten on 26 Jul 2022
fun = @(x) x(1)^sqrt(x(2));
lb = [1.01,1.01];
ub = [100,2];
x0 = [1.01,1.01];
sol = fmincon(fun,x0,[],[],[],[],lb,ub,@nonlcon)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = 1×2
10.0000 2.0000
fun(sol)
ans = 25.9546
function [c,ceq] = nonlcon(x)
c = -x(1)^x(2) + 100;
ceq = [];
end

Matt J
Matt J on 26 Jul 2022
Edited: Matt J on 26 Jul 2022
w = optimvar('w','Lower',log(1.01),'Upper',log(100)); %w=log(x)
z= optimvar('z','Lower',sqrt(1.01),'Upper',sqrt(2)); %z=sqrt(y)
prob = optimproblem( "Objective", w * z);
prob.Constraints.con4 = w * z.^2 >=log(100);
x0.w = log(1.01);
x0.z = sqrt(1.01);
sol = solve(prob,x0)
Solving problem using fmincon.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields:
    w: 2.3026
    z: 1.4142
x=exp(sol.w)
x = 10.0000
y=sol.z^2
y = 2.0000

Matt J
Matt J on 26 Jul 2022
"Why does the problem-based approach not 'translate' the problem 'correctly' in this case? ... I just cannot understand that the problem would be solved by 'fmincon', but obviously, the translation is different from the one I used."
I suspect that the error message is to be taken at face value: the problem-based solver does not know how to symbolically parse exponents that are OptimizationExpression objects. Using fcn2optimexpr seems to work around it, though:
x = optimvar('x');
y = optimvar('y');
fcn=@(A,B) fcn2optimexpr(@(a,b)a.^b, A,B);
prob = optimproblem( "Objective", fcn( x,sqrt(y) ) );
prob.Constraints.con1 = x >=1.01;
prob.Constraints.con2 = y >=1.01;
prob.Constraints.con3 = y <=2;
prob.Constraints.con4 = fcn(x,y) >=100;
prob.Constraints.con5 = x <= 100;
x0.x = 1.01;
x0.y = 1.01;
sol = solve(prob,x0)
Solving problem using fmincon.
Feasible point with lower objective function value found.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields:
    x: 10.0000
    y: 2.0000
  1 Comment
Torsten
Torsten on 26 Jul 2022
Edited: Torsten on 26 Jul 2022
Thank you, Matt.
You are right; it seems to be a problem with parsing the power operator. The documentation claims that pointwise power is supported, but maybe only for x raised to a constant exponent. Rewriting the powers via exp and log also works:
x = optimvar('x');
y = optimvar('y');
prob = optimproblem( "Objective", exp(sqrt(y)*log(x)));
prob.Constraints.con1 = x >=1.01;
prob.Constraints.con2 = y >=1.01;
prob.Constraints.con3 = y <=2;
prob.Constraints.con4 = exp(y*log(x))>=100;
prob.Constraints.con5 = x <= 100;
x0.x = 1.01;
x0.y = 1.01;
sol = solve(prob,x0)
Solving problem using fmincon.
Feasible point with lower objective function value found.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
sol = struct with fields:
    x: 10.0000
    y: 2.0000


Release: R2022a
