
quadprog

Quadratic programming

Description

Solver for quadratic objective functions with linear constraints.

quadprog finds a minimum for a problem specified by

$$\min_x \; \tfrac{1}{2}x^T H x + f^T x \quad \text{such that} \quad \begin{cases} A\,x \le b, \\ A_{eq}\,x = b_{eq}, \\ lb \le x \le ub. \end{cases}$$

H, A, and Aeq are matrices, and f, b, beq, lb, ub, and x are vectors.

You can pass f, lb, and ub as vectors or matrices; see Matrix Arguments.

Note

quadprog applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.

x = quadprog(H,f) returns a vector x that minimizes 1/2*x'*H*x + f'*x. The input H must be positive definite for the problem to have a finite minimum. If H is positive definite, then the solution x = H\(-f).
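
For instance, a minimal sketch (with hypothetical data) that compares the quadprog solution of an unconstrained problem with the closed-form solution H\(-f):

H = [2 1; 1 3];           % positive definite
f = [-1; -4];
x = quadprog(H,f);        % unconstrained quadratic minimization
xClosedForm = H\(-f);     % closed-form solution for positive definite H
norm(x - xClosedForm)     % should be close to zero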

x = quadprog(H,f,A,b) minimizes 1/2*x'*H*x + f'*x subject to the restrictions A*x ≤ b. The input A is a matrix of doubles, and b is a vector of doubles.


x = quadprog(H,f,A,b,Aeq,beq) solves the preceding problem subject to the additional restrictions Aeq*x = beq. Aeq is a matrix of doubles, and beq is a vector of doubles. If no inequalities exist, set A = [] and b = [].


x = quadprog(H,f,A,b,Aeq,beq,lb,ub) solves the preceding problem subject to the additional restrictions lb ≤ x ≤ ub. The inputs lb and ub are vectors of doubles, and the restrictions hold for each x component. If no equalities exist, set Aeq = [] and beq = [].

Note

If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].

quadprog resets components of x0 that violate the bounds lb ≤ x ≤ ub to the interior of the box defined by the bounds. quadprog does not change components that respect the bounds.


x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0) solves the preceding problem starting from the vector x0. If no bounds exist, set lb = [] and ub = []. Some quadprog algorithms ignore x0; see x0.

Note

x0 is a required argument for the 'active-set' algorithm.

x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options) solves the preceding problem using the optimization options specified in options. Use optimoptions to create options. If you do not want to give an initial point, set x0 = [].


x = quadprog(problem) returns the minimum for problem, a structure described in problem. Create the problem structure using dot notation or the struct function. Alternatively, create a problem structure from an OptimizationProblem object by using prob2struct.


[x,fval] = quadprog(___), using any of the preceding input argument combinations, also returns fval, the value of the objective function at x:

fval = 0.5*x'*H*x + f'*x


[x,fval,exitflag,output] = quadprog(___) also returns exitflag, an integer that describes the exit condition of quadprog, and output, a structure that contains information about the optimization.


[x,fval,exitflag,output,lambda] = quadprog(___) also returns lambda, a structure whose fields contain the Lagrange multipliers at the solution x.


[wsout,fval,exitflag,output,lambda] = quadprog(H,f,A,b,Aeq,beq,lb,ub,ws) starts quadprog from the data in the warm start object ws, using the options in ws. The returned argument wsout contains the solution point in wsout.X. By using wsout as the initial warm start object in a subsequent solver call, quadprog can work faster.


Examples


Find the minimum of

$$f(x) = \tfrac{1}{2}x_1^2 + x_2^2 - x_1 x_2 - 2x_1 - 6x_2$$

subject to the constraints

$$x_1 + x_2 \le 2, \qquad -x_1 + 2x_2 \le 2, \qquad 2x_1 + x_2 \le 3.$$

In quadprog syntax, this problem is to minimize

$$f(x) = \tfrac{1}{2}x^T H x + f^T x,$$

where

$$H = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}, \qquad f = \begin{bmatrix} -2 \\ -6 \end{bmatrix},$$

subject to the linear constraints.

To solve this problem, first enter the coefficient matrices.

H = [1 -1; -1 2]; 
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];

Call quadprog.

[x,fval,exitflag,output,lambda] = ...
   quadprog(H,f,A,b);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

Examine the final point, function value, and exit flag.

x,fval,exitflag
x = 2×1

    0.6667
    1.3333

fval = 
-8.2222
exitflag = 
1

An exit flag of 1 means the result is a local minimum. Because H is a positive definite matrix, this problem is convex, so the minimum is a global minimum.

Confirm that H is positive definite by checking its eigenvalues.

eig(H)
ans = 2×1

    0.3820
    2.6180

Find the minimum of

$$f(x) = \tfrac{1}{2}x_1^2 + x_2^2 - x_1 x_2 - 2x_1 - 6x_2$$

subject to the constraint

$$x_1 + x_2 = 0.$$

In quadprog syntax, this problem is to minimize

$$f(x) = \tfrac{1}{2}x^T H x + f^T x,$$

where

$$H = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}, \qquad f = \begin{bmatrix} -2 \\ -6 \end{bmatrix},$$

subject to the linear constraint.

To solve this problem, first enter the coefficient matrices.

H = [1 -1; -1 2]; 
f = [-2; -6];
Aeq = [1 1];
beq = 0;

Call quadprog, entering [] for the inputs A and b.

[x,fval,exitflag,output,lambda] = ...
   quadprog(H,f,[],[],Aeq,beq);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

Examine the final point, function value, and exit flag.

x,fval,exitflag
x = 2×1

   -0.8000
    0.8000

fval = 
-1.6000
exitflag = 
1

An exit flag of 1 means the result is a local minimum. Because H is a positive definite matrix, this problem is convex, so the minimum is a global minimum.

Confirm that H is positive definite by checking its eigenvalues.

eig(H)
ans = 2×1

    0.3820
    2.6180

Find the x that minimizes the quadratic expression

$$\tfrac{1}{2}x^T H x + f^T x$$

where

$$H = \begin{bmatrix} 1 & -1 & 1 \\ -1 & 2 & -2 \\ 1 & -2 & 4 \end{bmatrix}, \qquad f = \begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix},$$

subject to the constraints

$$0 \le x \le 1, \qquad \sum_i x_i = \tfrac{1}{2}.$$

To solve this problem, first enter the coefficients.

H = [1,-1,1
    -1,2,-2
    1,-2,4];
f = [2;-3;1];
lb = zeros(3,1);
ub = ones(size(lb));
Aeq = ones(1,3);
beq = 1/2;

Call quadprog, entering [] for the inputs A and b.

x = quadprog(H,f,[],[],Aeq,beq,lb,ub)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1

    0.0000
    0.5000
    0.0000

Set options to monitor the progress of quadprog.

options = optimoptions('quadprog','Display','iter');

Define a problem with a quadratic objective and linear inequality constraints.

H = [1 -1; -1 2]; 
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];

To help write the quadprog function call, set the unnecessary inputs to [].

Aeq = [];
beq = [];
lb = [];
ub = [];
x0 = [];

Call quadprog to solve the problem.

x = quadprog(H,f,A,b,Aeq,beq,lb,ub,x0,options)
 Iter            Fval  Primal Infeas    Dual Infeas  Complementarity  
    0   -8.884885e+00   3.214286e+00   1.071429e-01     1.000000e+00  
    1   -8.331868e+00   1.321041e-01   4.403472e-03     1.910489e-01  
    2   -8.212804e+00   1.676295e-03   5.587652e-05     1.009601e-02  
    3   -8.222204e+00   8.381476e-07   2.793826e-08     1.809485e-05  
    4   -8.222222e+00   3.064216e-14   9.992007e-16     7.525977e-13  

Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 2×1

    0.6667
    1.3333

Create a problem structure using a Problem-Based Optimization Workflow. Create an optimization problem equivalent to Quadratic Program with Linear Constraints.

x = optimvar('x',2);
objec = x(1)^2/2 + x(2)^2 - x(1)*x(2) - 2*x(1) - 6*x(2);
prob = optimproblem('Objective',objec);
prob.Constraints.cons1 = sum(x) <= 2;
prob.Constraints.cons2 = -x(1) + 2*x(2) <= 2;
prob.Constraints.cons3 = 2*x(1) + x(2) <= 3;

Convert prob to a problem structure.

problem = prob2struct(prob);

Solve the problem using quadprog.

[x,fval] = quadprog(problem)
Warning: Your Hessian is not symmetric. Resetting H=(H+H')/2.
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 2×1

    0.6667
    1.3333

fval = 
-8.2222

Solve a quadratic program and return both the solution and the objective function value.

H = [1,-1,1
    -1,2,-2
    1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];
b = 3;
[x,fval] = quadprog(H,f,A,b)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1

   -3.5714
    2.9286
    3.6429

fval = 
-47.1786

Check that the returned objective function value matches the value computed from the quadprog objective function definition.

fval2 = 1/2*x'*H*x + f'*x
fval2 = 
-47.1786

To see the optimization process for quadprog, set options to show an iterative display and return four outputs. The problem is to minimize

$$\tfrac{1}{2}x^T H x + f^T x$$

subject to

$$0 \le x \le 1,$$

where

$$H = \begin{bmatrix} 2 & 1 & -1 \\ 1 & 3 & \tfrac{1}{2} \\ -1 & \tfrac{1}{2} & 5 \end{bmatrix}, \qquad f = \begin{bmatrix} 4 \\ -7 \\ 12 \end{bmatrix}.$$

Enter the problem coefficients.

H = [2 1 -1
    1 3 1/2
    -1 1/2 5];
f = [4;-7;12];
lb = zeros(3,1);
ub = ones(3,1);

Set the options to display iterative progress of the solver.

options = optimoptions('quadprog','Display','iter');

Call quadprog with four outputs.

[x,fval,exitflag,output] = quadprog(H,f,[],[],[],[],lb,ub,[],options)
 Iter            Fval  Primal Infeas    Dual Infeas  Complementarity  
    0    2.691769e+01   1.582123e+00   1.712849e+01     1.680447e+00  
    1   -3.889430e+00   0.000000e+00   8.564246e-03     9.971731e-01  
    2   -5.451769e+00   0.000000e+00   4.282123e-06     2.710131e-02  
    3   -5.499997e+00   0.000000e+00   1.221903e-10     6.939689e-07  
    4   -5.500000e+00   0.000000e+00   5.842173e-14     3.469847e-10  

Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 3×1

    0.0000
    1.0000
    0.0000

fval = 
-5.5000
exitflag = 
1
output = struct with fields:
            message: 'Minimum found that satisfies the constraints....'
          algorithm: 'interior-point-convex'
      firstorderopt: 1.5921e-09
    constrviolation: 0
         iterations: 4
       linearsolver: 'dense'
       cgiterations: []

Solve a quadratic programming problem and return the Lagrange multipliers.

H = [1,-1,1
    -1,2,-2
    1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];
b = 3;
lb = zeros(3,1);
[x,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

Examine the Lagrange multiplier structure lambda.

disp(lambda)
    ineqlin: 12.0000
      eqlin: [0x1 double]
      lower: [3x1 double]
      upper: [3x1 double]

The linear inequality constraint has an associated Lagrange multiplier of 12.

Display the multipliers associated with the lower bound.

disp(lambda.lower)
    5.0000
    0.0000
    0.0000

Only the first component of lambda.lower has a nonzero multiplier. This generally means that only the first component of x is at the lower bound of zero. Confirm by displaying the components of x.

disp(x)
    0.0000
    1.5000
    1.5000

To speed subsequent quadprog calls, create a warm start object.

options = optimoptions('quadprog','Algorithm','active-set');
x0 = [1 2 3];
ws = optimwarmstart(x0,options);

Solve a quadratic program using ws.

H = [1,-1,1
    -1,2,-2
    1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];
b = 3;
lb = zeros(3,1);
tic
[ws,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb,[],ws);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
toc
Elapsed time is 0.060411 seconds.

Change the objective function and solve the problem again.

f = [-10;-15;-20];

tic
[ws,fval,exitflag,output,lambda] = quadprog(H,f,A,b,[],[],lb,[],ws);
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
toc
Elapsed time is 0.010756 seconds.

Input Arguments


Quadratic objective term, specified as a symmetric real matrix. H represents the quadratic in the expression 1/2*x'*H*x + f'*x. If H is not symmetric, quadprog issues a warning and uses the symmetrized version (H + H')/2 instead.

If the quadratic matrix H is sparse, then by default, the 'interior-point-convex' algorithm uses a slightly different algorithm than when H is dense. Generally, the sparse algorithm is faster on large, sparse problems, and the dense algorithm is faster on dense or small problems. For more information, see the LinearSolver option description and interior-point-convex quadprog Algorithm.

Example: [2,1;1,3]

Data Types: single | double
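
As an illustration of the sparse discussion above, this sketch (hypothetical sizes and data) builds a sparse tridiagonal H and requests the sparse internal linear solver explicitly:

n = 1000;
e = ones(n,1);
H = spdiags([-e 4*e -e],-1:1,n,n);   % sparse, symmetric, positive definite
f = -e;
lb = zeros(n,1);
ub = e;
opts = optimoptions('quadprog','LinearSolver','sparse');
x = quadprog(H,f,[],[],[],[],lb,ub,[],opts);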

Linear objective term, specified as a real vector. f represents the linear term in the expression 1/2*x'*H*x + f'*x.

Example: [1;3;2]

Data Types: single | double

Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (number of elements in x0). For large problems, pass A as a sparse matrix.

A encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and b is a column vector with M elements.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: single | double

Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.

b encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: single | double

Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (number of elements in x0). For large problems, pass Aeq as a sparse matrix.

Aeq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and beq is a column vector with Me elements.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: single | double

Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.

beq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: single | double

Lower bounds, specified as a real vector or real array. If the number of elements in x0 is equal to the number of elements in lb, then lb specifies that

x(i) >= lb(i) for all i.

If numel(lb) < numel(x0), then lb specifies that

x(i) >= lb(i) for 1 <= i <= numel(lb).

If lb has fewer elements than x0, solvers issue a warning.

Example: To specify that all x components are nonnegative, use lb = zeros(size(x0)).

Data Types: single | double

Upper bounds, specified as a real vector or real array. If the number of elements in x0 is equal to the number of elements in ub, then ub specifies that

x(i) <= ub(i) for all i.

If numel(ub) < numel(x0), then ub specifies that

x(i) <= ub(i) for 1 <= i <= numel(ub).

If ub has fewer elements than x0, solvers issue a warning.

Example: To specify that all x components are less than 1, use ub = ones(size(x0)).

Data Types: single | double

Initial point, specified as a real vector. The length of x0 is the number of rows or columns of H.

x0 applies to the 'trust-region-reflective' algorithm when the problem has only bound constraints. x0 also applies to the 'active-set' algorithm.

Note

x0 is a required argument for the 'active-set' algorithm.

If you do not specify x0, quadprog sets all components of x0 to a point in the interior of the box defined by the bounds. quadprog ignores x0 for the 'interior-point-convex' algorithm and for the 'trust-region-reflective' algorithm with equality constraints.

Example: [1;2;1]

Data Types: single | double

Optimization options, specified as the output of optimoptions or a structure such as optimset returns.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.

All Algorithms

Algorithm

Choose the algorithm:

  • 'interior-point-convex' (default)

  • 'trust-region-reflective'

  • 'active-set'

The 'interior-point-convex' algorithm handles only convex problems. The 'trust-region-reflective' algorithm handles problems with only bounds or only linear equality constraints, but not both. The 'active-set' algorithm handles indefinite problems provided that the projection of H onto the nullspace of Aeq is positive semidefinite. For details, see Choosing the Algorithm.
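
For example, a short sketch (hypothetical data) of selecting a nondefault algorithm; recall that 'active-set' requires a start point x0:

H = [1 -1; -1 2];
f = [-2; -6];
lb = [0; 0];
ub = [10; 10];
x0 = [1; 1];
opts = optimoptions('quadprog','Algorithm','active-set');
x = quadprog(H,f,[],[],[],[],lb,ub,x0,opts);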

Diagnostics

Display diagnostic information about the function to be minimized or solved. The choices are 'on' or 'off' (default).

Display

Level of display (see Iterative Display):

  • 'off' or 'none' displays no output.

  • 'final' displays only the final output (default).

The 'interior-point-convex' and 'active-set' algorithms allow additional values:

  • 'iter' specifies an iterative display.

  • 'iter-detailed' specifies an iterative display with a detailed exit message.

  • 'final-detailed' displays only the final output with a detailed exit message.

MaxIterations

Maximum number of iterations allowed; a nonnegative integer.

  • For a 'trust-region-reflective' equality-constrained problem, the default value is 2*(numberOfVariables - numberOfEqualities).

  • 'active-set' has a default of 10*(numberOfVariables + numberOfConstraints).

  • For all other algorithms and problems, the default value is 200.

For optimset, the option name is MaxIter. See Current and Legacy Option Names.

OptimalityTolerance

Termination tolerance on the first-order optimality; a nonnegative scalar.

  • For a 'trust-region-reflective' equality-constrained problem, the default value is 1e-6.

  • For a 'trust-region-reflective' bound-constrained problem, the default value is 100*eps, about 2.2204e-14.

  • For the 'interior-point-convex' and 'active-set' algorithms, the default value is 1e-8.

See Tolerances and Stopping Criteria.

For optimset, the option name is TolFun. See Current and Legacy Option Names.

StepTolerance

Termination tolerance on x; a nonnegative scalar.

  • For 'trust-region-reflective', the default value is 100*eps, about 2.2204e-14.

  • For 'interior-point-convex', the default value is 1e-12.

  • For 'active-set', the default value is 1e-8.

For optimset, the option name is TolX. See Current and Legacy Option Names.

'trust-region-reflective' Algorithm Only

FunctionTolerance

Termination tolerance on the function value; a nonnegative scalar. The default value depends on the problem type: bound-constrained problems use 100*eps, and linear equality-constrained problems use 1e-6. See Tolerances and Stopping Criteria.

For optimset, the option name is TolFun. See Current and Legacy Option Names.

HessianMultiplyFcn

Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function has the form

W = hmfun(Hinfo,Y)

where Hinfo (and potentially some additional parameters) contain the matrices used to compute H*Y.

See Quadratic Minimization with Dense, Structured Hessian for an example that uses this option.

For optimset, the option name is HessMult. See Current and Legacy Option Names.
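
The following sketch shows one plausible shape for such a function. It assumes, hypothetically, that the true Hessian has the structure H = B + V*V' with B sparse and V a thin low-rank factor; it is not the documented example, so see Quadratic Minimization with Dense, Structured Hessian for the full workflow.

n = 400;
e = ones(n,1);
B = spdiags([-e 4*e -e],-1:1,n,n);   % sparse part of the Hessian
V = randn(n,2);                      % low-rank factor (hypothetical data)
f = -e;
lb = zeros(n,1);
ub = e;
x0 = 0.5*e;

% Hessian multiply function: returns H*Y for H = B + V*V' without forming H.
% The first input is whatever you pass to quadprog in place of H (here B).
hmfun = @(Hinfo,Y) Hinfo*Y + V*(V'*Y);

opts = optimoptions('quadprog', ...
    'Algorithm','trust-region-reflective', ...
    'HessianMultiplyFcn',hmfun);
x = quadprog(B,f,[],[],[],[],lb,ub,x0,opts);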

MaxPCGIter

Maximum number of PCG (preconditioned conjugate gradient) iterations; a positive scalar. The default is max(1,floor(numberOfVariables/2)) for bound-constrained problems. For equality-constrained problems, quadprog ignores MaxPCGIter and uses MaxIterations to limit the number of PCG iterations. For more information, see Preconditioned Conjugate Gradient Method.

PrecondBandWidth

Upper bandwidth of the preconditioner for PCG; a nonnegative integer. By default, quadprog uses diagonal preconditioning (upper bandwidth 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step toward the solution.
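
For instance, a one-line sketch of requesting the direct factorization instead of PCG:

opts = optimoptions('quadprog', ...
    'Algorithm','trust-region-reflective', ...
    'PrecondBandWidth',Inf);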

SubproblemAlgorithm

Determines how the iteration step is calculated. The default, 'cg', takes a faster but less accurate step than 'factorization'. See trust-region-reflective quadprog Algorithm.

TolPCG

Termination tolerance on the PCG iteration; a positive scalar. The default is 0.1.

TypicalX

Typical x values. The number of elements in TypicalX equals the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). quadprog uses TypicalX internally for scaling. TypicalX has an effect only when x has unbounded components, and when a TypicalX value for an unbounded component exceeds 1.

'interior-point-convex' Algorithm Only

ConstraintTolerance

Tolerance on the constraint violation; a nonnegative scalar. The default is 1e-8.

For optimset, the option name is TolCon. See Current and Legacy Option Names.

LinearSolver

Type of internal linear solver in the algorithm:

  • 'auto' (default) — Use 'sparse' if the H matrix is sparse and 'dense' otherwise.

  • 'sparse' — Use sparse linear algebra. See Sparse Matrices.

  • 'dense' — Use dense linear algebra.

'active-set' Algorithm Only

ConstraintTolerance

Tolerance on the constraint violation; a nonnegative scalar. The default value is 1e-8.

For optimset, the option name is TolCon. See Current and Legacy Option Names.

ObjectiveLimit

A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the current point is feasible, the iterations halt because the problem is presumably unbounded. The default value is -1e20.

Single-Precision Code Generation

Algorithm

Must be 'active-set'.

ConstraintTolerance

Tolerance on the constraint violation, a positive scalar. The default value is 1e-4.

For optimset, the option name is TolCon. See Current and Legacy Option Names.

MaxIterations

Maximum number of iterations allowed, a nonnegative integer. The default value is 10*(nVar + mConstr), where nVar is the number of problem variables and mConstr is the number of constraints.

ObjectiveLimit

A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the current point is feasible, the iterations halt because the problem is presumably unbounded. The default value is -1e20.

OptimalityTolerance

Termination tolerance on the first-order optimality, a positive scalar. The default value is 1e-4. See First-Order Optimality Measure.

For optimset, the name is TolFun. See Current and Legacy Option Names.

StepTolerance

Termination tolerance on x, a positive scalar. The default value is 1e-4.

For optimset, the option name is TolX. See Current and Legacy Option Names.

Problem structure, specified as a structure with these fields:

H

Symmetric matrix in 1/2*x'*H*x

f

Vector in linear term f'*x

Aineq

Matrix in linear inequality constraints Aineq*x ≤ bineq

bineq

Vector in linear inequality constraints Aineq*x ≤ bineq

Aeq

Matrix in linear equality constraints Aeq*x = beq

beq

Vector in linear equality constraints Aeq*x = beq
lb

Vector of lower bounds

ub

Vector of upper bounds

x0

Initial point for x

solver

'quadprog'

options

Options created using optimoptions or optimset

The required fields are H, f, solver, and options. When solving, quadprog ignores any fields in problem other than those listed.

Note

You cannot use warm start with the problem argument.

Data Types: struct
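
In addition to prob2struct, you can assemble the structure by hand. A minimal sketch, reusing the data from the first example, with only the required fields plus inequality constraints:

problem.H = [1 -1; -1 2];
problem.f = [-2; -6];
problem.Aineq = [1 1; -1 2; 2 1];
problem.bineq = [2; 2; 3];
problem.solver = 'quadprog';                  % required field
problem.options = optimoptions('quadprog');   % required field
[x,fval] = quadprog(problem);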

Warm start object, specified as an object created using optimwarmstart. The warm start object contains the start point and options, and optional data for memory size in code generation. See Warm Start Best Practices.

Example: ws = optimwarmstart(x0,options)

Output Arguments


Solution, returned as a real vector. x is the vector that minimizes 1/2*x'*H*x + f'*x subject to all bounds and linear constraints. x can be a local minimum for nonconvex problems. For convex problems, x is a global minimum. For more information, see Local vs. Global Optima.

Solution warm start object, returned as a QuadprogWarmStart object. The solution point is wsout.X.

You can use wsout as the input warm start object in a subsequent quadprog call.

Objective function value at the solution, returned as a real scalar. fval is the value of 1/2*x'*H*x + f'*x at the solution x.

Reason quadprog stopped, returned as an integer described in this table.

All Algorithms

1

Function converged to the solution x.

0

Number of iterations exceeded options.MaxIterations.

-2

Problem is infeasible. Or, for 'interior-point-convex', the step size was smaller than options.StepTolerance, but constraints were not satisfied.

-3

Problem is unbounded.

'interior-point-convex' Algorithm

2

Step size was smaller than options.StepTolerance, constraints were satisfied.

-6

Nonconvex problem detected.

-8

Unable to compute a step direction.

'trust-region-reflective' Algorithm

4

Local minimum found; minimum is not unique.

3

Change in the objective function value was smaller than options.FunctionTolerance.

-4

Current search direction was not a direction of descent. No further progress could be made.

'active-set' Algorithm

-6

Nonconvex problem detected; projection of H onto the nullspace of Aeq is not positive semidefinite.

Note

Occasionally, the 'active-set' algorithm halts with exit flag 0 when the problem is, in fact, unbounded. Setting a higher iteration limit also results in exit flag 0.
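
A short usage sketch (hypothetical data) that branches on the exit flag:

H = [1 -1; -1 2];
f = [-2; -6];
A = [1 1; -1 2; 2 1];
b = [2; 2; 3];
[x,fval,exitflag] = quadprog(H,f,A,b);
if exitflag <= 0
    warning('quadprog stopped without verifying a minimum (exitflag = %d).',exitflag)
end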

Information about the optimization process, returned as a structure with these fields:

iterations

Number of iterations taken

algorithm

Optimization algorithm used

cgiterations

Total number of PCG iterations ('trust-region-reflective' algorithm only)

constrviolation

Maximum of constraint functions

firstorderopt

Measure of first-order optimality

linearsolver

Type of internal linear solver, 'dense' or 'sparse' ('interior-point-convex' algorithm only)

message

Exit message

Lagrange multipliers at the solution, returned as a structure with these fields:

lower

Lower bounds lb

upper

Upper bounds ub

ineqlin

Linear inequalities

eqlin

Linear equalities

For details, see Lagrange Multiplier Structures.
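
As a quick sanity check, the multipliers satisfy the stationarity condition of the KKT equations. This sketch uses the data from the Lagrange multiplier example above; the sign convention shown is an assumption consistent with that example's output.

% Sketch: the KKT stationarity residual should be numerically zero at the
% solution: H*x + f + A'*lambda.ineqlin - lambda.lower + lambda.upper = 0.
H = [1,-1,1; -1,2,-2; 1,-2,4];
f = [-7;-12;-15];
A = [1,1,1];  b = 3;  lb = zeros(3,1);
[x,~,~,~,lambda] = quadprog(H,f,A,b,[],[],lb);
residual = H*x + f + A'*lambda.ineqlin - lambda.lower + lambda.upper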

More About


Enhanced Exit Messages

The next few items list the possible enhanced exit messages from quadprog. Enhanced exit messages give a link for more information as the first sentence of the message.

Minimum Found That Satisfies The Constraints

The solver found a minimizing point that satisfies all bounds and linear constraints. Since the problem is convex, the minimizing point is a global minimum. For more information, see Local vs. Global Optima.

Solver Stalled, Constraints Satisfied

The solver stopped because the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end. Sometimes, this means that the solver located the minimum. However, the first-order optimality measure was not less than the OptimalityTolerance, so it is possible that the result is inaccurate. All constraints were satisfied.

To proceed, try the following:

  • Examine the first-order optimality measure in the output structure. If the first-order optimality measure is small, then it is likely that the returned solution is accurate.

  • Set the StepTolerance option to 0. Sometimes, this setting helps the solver proceed, though sometimes the solver remains stalled because of other issues.

  • Try a different algorithm. If the solver offers a choice of algorithms, sometimes a different algorithm can succeed.

  • Try removing dependent constraints. This means ensure that none of the linear constraints are redundant.

Problem Appears Unbounded

quadprog stopped because it appears to have found a direction that satisfies all constraints and causes the objective to decrease without bound.

To proceed,

  • Ensure that you have finite bounds for each component.

  • Check the objective function to ensure that it is strictly convex (the quadratic matrix has strictly positive eigenvalues).

  • See if the associated linear programming problem (the original problem without the quadratic term) has a finite solution.

Unable to Compute a Step Direction

The solver was unable to proceed because it could not compute a direction leading to a minimum. It is likely that this trouble is due to redundant linear constraints or tolerances that are too small.

To proceed,

  • Check your linear constraint matrices for redundancy. Try to identify and remove redundant linear constraints.

  • Ensure that your FunctionTolerance, OptimalityTolerance, and ConstraintTolerance options are above 1e-14, and are preferably above 1e-12. See Tolerances and Stopping Criteria.

The Problem Is Non-Convex

quadprog determined that the problem is not convex. Try a different algorithm. For more information, see Quadratic Programming Algorithms.

Solution Found During Presolve

The solver found the solution during the presolve phase. This means the bounds, linear constraints, and f (linear objective coefficient) immediately lead to a solution. For more information, see Presolve/Postsolve.

The Problem Is Infeasible

During presolve, the solver found that the problem has an inconsistent formulation. Inconsistent means not all constraints can be satisfied at a single point x. For more information, see Presolve/Postsolve.

The Problem Is Unbounded

During presolve, the solver found a feasible direction where the objective function decreases without bound. For more information, see Presolve/Postsolve.

Converged to an Infeasible Point

quadprog converged to a point that does not satisfy all constraints to within the constraint tolerance called ConstraintTolerance. The reason quadprog stopped is that the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end.

For suggestions on how to proceed, see quadprog Converges to an Infeasible Point.

No feasible solution found

The solver converged to a point that does not satisfy all constraints to within the constraint tolerance called ConstraintTolerance. The reason the solver stopped is that the last step was too small. When the relative step size goes below the StepTolerance tolerance, then the iterations end.

No feasible solution found

There is no point satisfying all of the bounds and linear constraints. For help examining the inconsistent linear constraints, see Investigate Linear Infeasibilities.

Optimal Solution Found

There is only one feasible point. The number of independent linear equality constraints is the same as the number of variables in the problem.

Optimal Solution Found

The solver stopped because the first-order optimality measure is less than the OptimalityTolerance tolerance.

The first-order optimality measure is the infinity norm of the projected gradient. The projection is onto the null space of the linear equality matrix Aeq.

Local Minimum Found

The solver stopped at a point of zero curvature that is a local minimum. There are other feasible points that have the same objective function value.

The Problem Is Unbounded

There are directions of zero or negative curvature along which the objective function decreases indefinitely. Therefore, for any target value, there are feasible points with objective value smaller than the target. Check whether you included enough constraints in the problem, such as bounds on all variables.

Optimal Solution Found

The solver stopped because the first-order optimality measure is less than the OptimalityTolerance tolerance.

Local Minimum Possible

The solver stopped because the relative change in function value was below the FunctionTolerance tolerance. To check solution quality, see Local Minimum Possible.

Local Minimum Possible

The solver stopped because the relative change in function value was below the square root of the FunctionTolerance tolerance, and the change of function values in the previous iterations is decreasing by less than a factor of 3.5. This criterion stops the solver when the difference of objective function values is relatively small, but does not decrease to zero quickly enough. To check solution quality, see Local Minimum Possible.

Definitions for Exit Messages

The next few items contain definitions for terms in the quadprog exit messages.

tolerance

Generally, a tolerance is a threshold which, if crossed, stops the iterations of a solver. For more information on tolerances, see Tolerances and Stopping Criteria.

Convex

A quadratic program is convex if, from any feasible point, there is no feasible direction with negative curvature. A convex problem has only one local minimum, which is also the global minimum.
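
One practical sufficient check for convexity of the quadratic objective is to test whether the symmetrized Hessian is positive semidefinite, as in this sketch:

H = [1 -1; -1 2];                    % example Hessian
Hs = (H + H')/2;                     % quadprog uses the symmetrized matrix
isConvex = min(eig(Hs)) >= -1e-10    % true: no negative eigenvalues (up to round-off)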

Feasible Directions

The feasible directions from a feasible point x are those vectors v such that for small enough positive a, x + av is feasible.

A feasible point is one satisfying all the constraints.

StepTolerance

StepTolerance is a tolerance for the size of the last step, meaning the size of the change in the location where quadprog evaluated the objective function.

OptimalityTolerance

The tolerance called OptimalityTolerance relates to the first-order optimality measure. Iterations end when the first-order optimality measure is less than OptimalityTolerance.

For constrained problems, the first-order optimality measure is the maximum of the following two quantities:

$$\left\|\nabla_x L(x,\lambda)\right\|_\infty = \left\|\nabla f(x) + A^T \lambda_{ineqlin} + A_{eq}^T \lambda_{eqlin} + \sum_i \lambda_{ineqnonlin,i}\,\nabla c_i(x) + \sum_i \lambda_{eqnonlin,i}\,\nabla ceq_i(x)\right\|_\infty,$$

$$\max_i \left\{\, |l_i - x_i|\,\lambda_{lower,i},\;\; |x_i - u_i|\,\lambda_{upper,i},\;\; |(Ax - b)_i|\,\lambda_{ineqlin,i},\;\; |c_i(x)|\,\lambda_{ineqnonlin,i} \,\right\}.$$

For unconstrained problems, the first-order optimality measure is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm).

The first-order optimality measure should be zero at a minimizing point.

For more information, including definitions of all the variables in these equations, see First-Order Optimality Measure.

first-order optimality measure for problems with bounds

For unconstrained problems, the first-order optimality measure is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm of the gradient). This should be zero at a minimizing point.

For problems with bounds, the first-order optimality measure is the maximum over i of |vi*gi|. Here gi is the ith component of the gradient, x is the current point, and

$$v_i = \begin{cases} |x_i - b_i| & \text{if the negative gradient points toward bound } b_i, \\ 1 & \text{otherwise.} \end{cases}$$

If xi is at a bound, vi is zero. If xi is not at a bound, then at a minimizing point the gradient gi should be zero. Therefore the first-order optimality measure should be zero at a minimizing point.

For more information, see First-Order Optimality Measure.

ConstraintTolerance

The constraint tolerance called ConstraintTolerance is the maximum of the values of all constraint functions at the current point.

ConstraintTolerance operates differently from other tolerances. If ConstraintTolerance is not satisfied (i.e., if the magnitude of the constraint function exceeds ConstraintTolerance), the solver attempts to continue, unless it is halted for another reason. A solver does not halt simply because ConstraintTolerance is satisfied.

Relative Dual Feasibility

The dual feasibility rd is defined in terms of the KKT conditions for the problem. The relative dual feasibility stopping condition is

$$r_d \le \rho \cdot \mathrm{OptimalityTolerance}, \tag{1}$$

where ρ is a scale factor.

For more information, see Predictor-Corrector.

Dual Feasibility

The KKT conditions state that at an optimum x, there are Lagrange multipliers λ¯ineq and λeq such that

$$\begin{aligned} Hx + c + A_{eq}^T \lambda_{eq} + \bar{A}^T \bar{\lambda}_{ineq} &= 0 \\ \bar{A}x - \bar{b} + s &= 0 \\ A_{eq}x - b_{eq} &= 0 \\ s_i\,\bar{\lambda}_{ineq,i} &= 0 \\ s_i \ge 0, \quad \bar{\lambda}_{ineq,i} &\ge 0. \end{aligned}$$

The variables A¯, λ¯ineq, and b¯ include bounds as part of the linear inequalities.

The dual feasibility $r_d$ is the norm of $Hx + c + A_{eq}^T \lambda_{eq} + \bar{A}^T \bar{\lambda}_{ineq}$.

Scale Factor

The scale factor ρ is

$$\rho = \max\bigl(1,\, \|H\|,\, \|\bar{A}\|,\, \|A_{eq}\|,\, \|c\|,\, \|\bar{b}\|,\, \|b_{eq}\|\bigr).$$

The norm is the maximum absolute value of the elements in the expression.

Complementarity Measure

The complementarity measure is defined in terms of the KKT conditions for the problem. At an optimum x, there are Lagrange multipliers λ¯ineq and λeq such that

$$\begin{aligned} Hx + c + A_{eq}^T \lambda_{eq} + \bar{A}^T \bar{\lambda}_{ineq} &= 0 \\ \bar{A}x - \bar{b} + s &= 0 \\ A_{eq}x - b_{eq} &= 0 \\ s_i\,\bar{\lambda}_{ineq,i} &= 0 \\ s_i \ge 0, \quad \bar{\lambda}_{ineq,i} &\ge 0. \end{aligned}$$

The variables A¯, λ¯ineq, and b¯ include bounds as part of the linear inequalities.

The complementarity measure is

$$\sum_i s_i\,\bar{\lambda}_{ineq,i}.$$

For more information, see Predictor-Corrector.

Total Relative Error

The total relative error is defined in terms of the KKT conditions for the problem. The total relative error stopping condition holds when the Merit Function φ satisfies

$$\varphi \ge \max\bigl(\mathrm{OptimalityTolerance},\, 10^5\,\varphi_{\min}\bigr). \tag{2}$$

When this stopping condition holds, the solver determines that the quadratic program is infeasible.

Merit Function

The KKT conditions state that at an optimum x, there are Lagrange multipliers λ¯ineq and λeq such that

Hx+c+AeqTλeq+A¯Tλ¯ineq=0A¯xb¯+s=0Aeqxbeq=0siλ¯ineq,i=0si0λ¯ineq,i0.

The variables A¯, λ¯ineq, and b¯ include bounds as part of the linear inequalities.

The merit function φ is

$$\varphi = \frac{1}{\rho}\Bigl(\max\bigl(\|r_{eq}\|,\, \|r_{ineq}\|,\, \|r_d\|\bigr) + g\Bigr).$$

The terms in the definition of φ are:

$$\begin{aligned} \rho &= \max\bigl(1,\, \|H\|,\, \|\bar{A}\|,\, \|A_{eq}\|,\, \|c\|,\, \|\bar{b}\|,\, \|b_{eq}\|\bigr) \\ r_{eq} &= A_{eq}x - b_{eq} \\ r_{ineq} &= \bar{A}x - \bar{b} + s \\ r_d &= Hx + c + A_{eq}^T \lambda_{eq} + \bar{A}^T \bar{\lambda}_{ineq} \\ g &= x^T H x + f^T x - \bar{b}^T \bar{\lambda}_{ineq} - b_{eq}^T \lambda_{eq}. \end{aligned}$$

The expression φmin means the minimum of φ seen in all iterations.

Presolve

Presolve is a set of algorithms that simplify a linear or quadratic programming problem. The algorithms look for simple inconsistencies such as inconsistent bounds and linear constraints. They also look for redundant bounds and linear inequalities. For more information, see Presolve/Postsolve.

The Problem Appears to Be Ill-Conditioned

The internally-calculated search direction does not decrease the objective function value. Perhaps the problem is poorly scaled or has an ill-conditioned matrix (H for quadprog, C for lsqlin). For suggestions on how to proceed, see When the Solver Fails or Local Minimum Possible.

Algorithms


'interior-point-convex'

The 'interior-point-convex' algorithm attempts to follow a path that is strictly inside the constraints. It uses a presolve module to remove redundancies and to simplify the problem by solving for components that are straightforward.

The algorithm has different implementations for a sparse Hessian matrix H and for a dense matrix. Generally, the sparse implementation is faster on large, sparse problems, and the dense implementation is faster on dense or small problems. For more information, see interior-point-convex quadprog Algorithm.

'trust-region-reflective'

The 'trust-region-reflective' algorithm is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). For more information, see trust-region-reflective quadprog Algorithm.

'active-set'

The 'active-set' algorithm is a projection method, similar to the one described in [2]. The algorithm is not large-scale; see Large-Scale vs. Medium-Scale Algorithms. For more information, see active-set quadprog Algorithm.

Warm Start

A warm start object maintains a list of active constraints from the previous solved problem. The solver carries over as much active constraint information as possible to solve the current problem. If the previous problem is too different from the current one, no active set information is reused. In this case, the solver effectively executes a cold start in order to rebuild the list of active constraints.

Alternative Functionality

App

The Optimize Live Editor task provides a visual interface for quadprog.

References

[1] Coleman, T. F., and Y. Li. “A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables.” SIAM Journal on Optimization. Vol. 6, Number 4, 1996, pp. 1040–1058.

[2] Gill, P. E., W. Murray, and M. H. Wright. Practical Optimization. London: Academic Press, 1981.

[3] Gould, N., and P. L. Toint. “Preprocessing for quadratic programming.” Mathematical Programming. Series B, Vol. 100, 2004, pp. 95–132.

Extended Capabilities

Version History

Introduced before R2006a
