# Effect of Automatic Differentiation in Problem-Based Optimization

When you use automatic differentiation, the problem-based solve function generally requires fewer function evaluations and can be more robust.

By default, solve uses automatic differentiation to evaluate the gradients of the objective and nonlinear constraint functions, when applicable. Automatic differentiation applies to functions that are expressed in terms of supported operations on optimization variables, without using the fcn2optimexpr function. See Automatic Differentiation in Optimization Toolbox and Convert Nonlinear Function to Optimization Expression.
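For example (a sketch, not part of the original documentation): solve can apply automatic differentiation to the first expression below because it is built only from supported operations on optimization variables. The second expression wraps a function with fcn2optimexpr, so solve instead estimates its gradients by finite differences; besselj is used here only as a stand-in for a function outside the supported operation set.

```matlab
x = optimvar('x',2);

% Built from supported operations on optimization variables:
% solve can use automatic differentiation for its gradients.
supportedExpr = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

% Not composed of supported operations, so it must be converted
% with fcn2optimexpr; solve then uses finite-difference gradients.
blackBoxExpr = fcn2optimexpr(@(v) besselj(0,v(1)) + v(2)^2, x);
```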

### Minimization Problem

Consider the problem of minimizing the following objective function:

$$\begin{array}{l}\mathit{fun1}=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(1-{x}_{1}\right)}^{2}\\[4pt] \mathit{fun2}=\mathrm{exp}\left(-\sum_{i} {\left({x}_{i}-{y}_{i}\right)}^{2}\right)\mathrm{exp}\left(-\mathrm{exp}\left(-{y}_{1}\right)\right)\operatorname{sech}\left({y}_{2}\right)\\[4pt] \mathit{objective}=\mathit{fun1}-\mathit{fun2}.\end{array}$$

Create optimization variables, form the objective function expression, and place it in an optimization problem.

prob = optimproblem;
x = optimvar('x',2);
y = optimvar('y',2);
fun1 = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
fun2 = exp(-sum((x - y).^2))*exp(-exp(-y(1)))*sech(y(2));
prob.Objective = fun1 - fun2;

The minimization is subject to the nonlinear constraint ${x}_{1}^{2}+{x}_{2}^{2}+{y}_{1}^{2}+{y}_{2}^{2}\le 4$.

prob.Constraints.cons = sum(x.^2 + y.^2) <= 4;
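Optionally, you can review the problem formulation before solving (this step is an addition to the original example, and assumes a release that includes the show function for optimization problems):

```matlab
% Display the variables, objective, and constraints solve will use
show(prob)
```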

### Solve Problem and Examine Solution Process

Solve the problem starting from an initial point.

init.x = [-1;2];
init.y = [1;-1];
[xproblem,fvalproblem,exitflagproblem,outputproblem] = solve(prob,init);
Solving problem using fmincon.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
disp(fvalproblem)
-0.5500
disp(outputproblem.funcCount)
77
disp(outputproblem.iterations)
46

The output structure shows that solve calls fmincon, which requires 77 function evaluations and 46 iterations to solve the problem. The objective function value at the solution is fvalproblem = -0.55.

### Solve Problem Without Automatic Differentiation

To determine the efficiency gains from automatic differentiation, set the solve name-value arguments ObjectiveDerivative and ConstraintDerivative to use finite-difference gradients instead.

[xfd,fvalfd,exitflagfd,outputfd] = solve(prob,init,...
    "ObjectiveDerivative","finite-differences","ConstraintDerivative","finite-differences");
Solving problem using fmincon.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
disp(fvalfd)
-0.5500
disp(outputfd.funcCount)
264
disp(outputfd.iterations)
46

Using a finite-difference gradient approximation causes solve to take 264 function evaluations, compared to 77 with automatic differentiation. The number of iterations is the same, as is the reported objective function value at the solution. The final solution points are the same to display precision.
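In recent releases, the output structure returned by solve also records which derivative method it used, so you can confirm the comparison directly. The field names below are an assumption based on the fmincon output structure from solve and may not exist in older releases:

```matlab
% Check which gradient method solve used in each run
% (assumption: these fields exist in your toolbox release)
disp(outputproblem.objectivederivative)
disp(outputfd.objectivederivative)
```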

disp([xproblem.x,xproblem.y])
0.8671    1.0433
0.7505    0.5140
disp([xfd.x,xfd.y])
0.8671    1.0433
0.7505    0.5140

In summary, the main effect of automatic differentiation in optimization is to reduce the number of function evaluations.