The following table is designed to help you choose a solver. It does not address multiobjective optimization or equation solving. There are more details on all the solvers in Problems Handled by Optimization Toolbox Functions.
Use the table as follows:
Identify your objective function as one of five types:
Linear
Quadratic
Sum-of-squares (Least squares)
Smooth nonlinear
Nonsmooth
Identify your constraints as one of five types:
None (unconstrained)
Bound
Linear (including bound)
General smooth
Discrete (integer)
Use the table to identify a relevant solver.
In this table:
* means relevant solvers are found in Global Optimization Toolbox functions (licensed separately from Optimization Toolbox™ solvers).
fmincon applies to most smooth objective functions with smooth constraints. It is not listed as a preferred solver for least squares or linear or quadratic programming because the listed solvers are usually more efficient.
The table has suggested functions, but it is not meant to unduly restrict your choices. For example, fmincon can be effective on some nonsmooth problems.
The Global Optimization Toolbox ga function can address mixed-integer programming problems.
Solvers by Objective and Constraint
(Rows give the constraint type; columns give the objective type.)

| Constraint Type | Linear | Quadratic | Least Squares | Smooth nonlinear | Nonsmooth |
|---|---|---|---|---|---|
| None | n/a (f = const, or min = −∞) | quadprog, Information | mldivide (\), lsqcurvefit, lsqnonlin, Information | fminsearch, fminunc, Information | fminsearch, * |
| Bound | linprog, Information | quadprog, Information | lsqcurvefit, lsqlin, lsqnonlin, lsqnonneg, Information | fminbnd, fmincon, fseminf, Information | fminbnd, * |
| Linear | linprog, Information | quadprog, Information | lsqlin, Information | fmincon, fseminf, Information | * |
| General smooth | fmincon, Information | fmincon, Information | fmincon, Information | fmincon, fseminf, Information | * |
| Discrete | intlinprog, Information | * | * | * | * |
Note: This table does not list multiobjective solvers or equation solvers. See Problems Handled by Optimization Toolbox Functions for a complete list of problems addressed by Optimization Toolbox functions.
Note: Some solvers have several algorithms. For help choosing, see Choosing the Algorithm.
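As an illustration of reading the table (with a made-up objective, not one from this documentation): a smooth nonlinear objective of a single variable with bound constraints falls in the Bound row and Smooth nonlinear column, which lists fminbnd.
fun = @(x) sin(x) + x.^2/10;   % made-up smooth scalar objective
x = fminbnd(fun,-5,5)          % minimize over the interval -5 <= x <= 5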
fmincon has four algorithm options:
'interior-point' (default)
'trust-region-reflective'
'sqp'
'active-set'
Use optimoptions to set the Algorithm option at the command line.
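For example, the following sketch (with a made-up objective and bounds) selects the 'sqp' algorithm instead of the default:
fun = @(x) sum(x.^2);                                   % made-up smooth objective
options = optimoptions(@fmincon,'Algorithm','sqp');     % choose the 'sqp' algorithm
x = fmincon(fun,[1 2],[],[],[],[],[0 0],[],[],options)  % bound-constrained minimization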
Recommendations
Reasoning Behind the Recommendations.
'interior-point' handles large, sparse problems, as well as small dense problems. The algorithm satisfies bounds at all iterations, and can recover from NaN or Inf results. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques for large-scale problems. For details, see Interior-Point Algorithm in fmincon options.
'sqp' satisfies bounds at all iterations. The algorithm can recover from NaN or Inf results. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.
'active-set' can take large steps, which adds speed. The algorithm is effective on some problems with nonsmooth constraints. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.
'trust-region-reflective' requires you to provide a gradient, and allows only bounds or linear equality constraints, but not both. Within these limitations, the algorithm handles both large sparse problems and small dense problems efficiently. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques to save memory, such as a Hessian multiply function. For details, see Trust-Region-Reflective Algorithm in fmincon options.
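A minimal sketch of supplying a gradient for the 'trust-region-reflective' algorithm, assuming a bound-constrained problem with a made-up objective; the gradient option is named 'GradObj' in the releases this page describes (renamed 'SpecifyObjectiveGradient' in later releases):
fun = @(x) deal(sum(x.^2), 2*x(:));   % made-up objective returning its value and gradient
opts = optimoptions(@fmincon,'Algorithm','trust-region-reflective','GradObj','on');
x = fmincon(fun,[1;1],[],[],[],[],[0;0],[],[],opts)   % bounds only, as the algorithm requires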
fsolve has three algorithms:
'trust-region-dogleg' (default)
'trust-region-reflective'
'levenberg-marquardt'
Use optimoptions to set the Algorithm option at the command line.
Recommendations
Reasoning Behind the Recommendations.
'trust-region-dogleg' is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.
The 'trust-region-reflective' algorithm is effective on sparse problems. It can use special techniques, such as a Jacobian multiply function, for large-scale problems.
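A minimal sketch with a made-up square system (two equations, two unknowns), explicitly selecting the default 'trust-region-dogleg' algorithm:
F = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];         % made-up nonlinear system F(x) = 0
opts = optimoptions(@fsolve,'Algorithm','trust-region-dogleg');
x = fsolve(F,[1;1],opts)                             % solve starting from [1;1]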
fminunc has two algorithms:
'trust-region' (formerly LargeScale = 'on'), the default
'quasi-newton' (formerly LargeScale = 'off')
Use optimoptions to set the Algorithm option at the command line.
Recommendations
For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
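A minimal sketch with a made-up unconstrained objective and no gradient supplied, so the 'quasi-newton' algorithm is the natural choice:
fun = @(x) (x(1)-3)^2 + (x(2)+1)^2;              % made-up smooth objective
opts = optimoptions(@fminunc,'Algorithm','quasi-newton');
x = fminunc(fun,[0 0],opts)                      % unconstrained minimization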
lsqlin has three algorithms:
'trust-region-reflective' (formerly LargeScale = 'on'), the default
'active-set' (formerly LargeScale = 'off')
'interior-point'
Use optimoptions to set the Algorithm option at the command line.
Recommendations
For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
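A minimal sketch of a bound-constrained linear least-squares problem with made-up data, using the 'interior-point' algorithm:
C = [1 1; 1 2; 2 1];  d = [2; 3; 3];             % made-up design matrix and data
lb = [0; 0];  ub = [1; 1];                       % bounds on the solution
opts = optimoptions(@lsqlin,'Algorithm','interior-point');
x = lsqlin(C,d,[],[],[],[],lb,ub,[],opts)        % minimize the residual norm subject to the bounds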
lsqcurvefit and lsqnonlin have two algorithms:
'trust-region-reflective' (default)
'levenberg-marquardt'
Use optimoptions to set the Algorithm option at the command line.
Recommendations
For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
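A minimal sketch of data fitting with lsqcurvefit, using synthetic data and a made-up exponential model; the default 'trust-region-reflective' algorithm handles the parameter bounds:
xdata = 0:0.5:5;
ydata = 2*exp(-0.8*xdata) + 0.05*randn(size(xdata));   % synthetic noisy data
model = @(p,xdata) p(1)*exp(p(2)*xdata);               % model with parameters p(1), p(2)
p0 = [1 -1];  lb = [0 -5];  ub = [10 0];               % initial guess and parameter bounds
p = lsqcurvefit(model,p0,xdata,ydata,lb,ub)            % fitted parameters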
linprog has five algorithms:
'interior-point-legacy', the default
'interior-point'
'dual-simplex'
'simplex'
'active-set'
Use optimoptions to set the Algorithm option at the command line.
Recommendations
For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
Reasoning Behind the Recommendations.
The 'interior-point-legacy', 'interior-point', and 'dual-simplex' algorithms are large-scale algorithms, while 'simplex' and 'active-set' are not. See Large-Scale vs. Medium-Scale Algorithms.
Often, the 'interior-point' and 'dual-simplex' algorithms are faster and use less memory than the other algorithms.
The default 'interior-point-legacy' algorithm is similar to 'interior-point', but 'interior-point-legacy' can be slower, less robust, or use more memory.
The 'active-set' and 'simplex' algorithms will be removed in a future release.
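A minimal sketch with a made-up linear program, selecting the 'dual-simplex' algorithm:
f = [1; 1];                            % minimize f'*x
A = [-1 -2; -3 -1];  b = [-4; -6];     % inequality constraints A*x <= b
lb = [0; 0];                           % nonnegativity bounds
opts = optimoptions(@linprog,'Algorithm','dual-simplex');
x = linprog(f,A,b,[],[],lb,[],[],opts)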
quadprog has three algorithms:
'interior-point-convex' (default)
'trust-region-reflective'
'active-set' (will be removed in a future release)
Use optimoptions to set the Algorithm option at the command line.
Recommendations
For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded.
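A minimal sketch with a made-up convex quadratic program, explicitly selecting the default 'interior-point-convex' algorithm:
H = [2 0; 0 2];  f = [-2; -5];        % minimize 0.5*x'*H*x + f'*x
A = [1 1];  b = 3;                    % subject to A*x <= b
opts = optimoptions(@quadprog,'Algorithm','interior-point-convex');
x = quadprog(H,f,A,b,[],[],[],[],[],opts)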
An optimization algorithm is large scale when it uses linear algebra that does not need to store, nor operate on, full matrices. This may be done internally by storing sparse matrices, and by using sparse linear algebra for computations whenever possible. Furthermore, the internal algorithms either preserve sparsity, such as a sparse Cholesky decomposition, or do not generate matrices, such as a conjugate gradient method.
In contrast, medium-scale methods internally create full matrices and use dense linear algebra. If a problem is sufficiently large, full matrices take up a significant amount of memory, and the dense linear algebra may require a long time to execute.
Don't let the name "large scale" mislead you; you can use a large-scale algorithm on a small problem. Furthermore, you do not need to specify any sparse matrices to use a large-scale algorithm. Choose a medium-scale algorithm to access extra functionality, such as additional constraint types, or possibly for better performance.
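As a rough illustration of the storage difference (the matrix and size here are made up): a sparse identity matrix stores only its nonzeros, while the full version stores every entry, and large-scale algorithms accept either form.
n = 5000;
Hfull = eye(n);          % full storage: n^2 double-precision entries
Hsparse = speye(n);      % sparse storage: only the n nonzeros
whos Hfull Hsparse       % compare the memory used by each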
Interior-point algorithms in fmincon, quadprog, and linprog have many good characteristics, such as low memory usage and the ability to solve large problems quickly. However, their solutions can be slightly less accurate than those from other algorithms. The reason for this potential inaccuracy is that the (internally calculated) barrier function keeps iterates away from inequality constraint boundaries.
For most practical purposes, this inaccuracy is usually quite small.
To reduce the inaccuracy, try to:
Rerun the solver with smaller TolX, TolFun, and possibly TolCon tolerances (but keep the tolerances sensible; see Tolerances and Stopping Criteria).
Run a different algorithm, starting from the interior-point solution. This can fail, because some algorithms can use excessive memory or time, and some linprog and quadprog algorithms do not accept an initial point. A sketch of both approaches appears after this list.
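A minimal sketch of both remedies, with a made-up objective, start point, and bound (the tolerance values are illustrative only):
fun = @(x) x(1);  x0 = 1;  lb = 0;                          % made-up problem data
% 1) Rerun the interior-point algorithm with tighter tolerances.
opts = optimoptions(@fmincon,'Algorithm','interior-point','TolX',1e-12,'TolFun',1e-10);
x1 = fmincon(fun,x0,[],[],[],[],lb,[],[],opts);
% 2) Restart a different algorithm from the interior-point solution.
opts.Algorithm = 'sqp';
x2 = fmincon(fun,x1,[],[],[],[],lb,[],[],opts)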
For example, try to minimize the function x when bounded below by 0. Using the fmincon interior-point algorithm:
options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
x = 2.0000e-08
Using the fmincon sqp algorithm:
options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
x2 = 0
Similarly, solve the same problem using the linprog interior-point algorithm:
opts = optimoptions(@linprog,'Display','off','Algorithm','interior-point');
x = linprog(1,[],[],[],[],0,[],1,opts)
x = 2.0833e-13
Using the linprog simplex algorithm:
opts.Algorithm = 'simplex';
x2 = linprog(1,[],[],[],[],0,[],1,opts)
x2 = 0
In these cases, the interior-point algorithms are less accurate, but the answers are quite close to the correct answer.
The following tables show the functions available for minimization, equation solving, multiobjective optimization, and solving least-squares or data-fitting problems.
Minimization Problems

| Type | Formulation | Solver |
|---|---|---|
| Scalar minimization | min f(x) such that lb < x < ub (x is scalar) | fminbnd |
| Unconstrained minimization | min f(x) | fminsearch, fminunc |
| Linear programming | min fᵀ·x such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | linprog |
| Mixed-integer linear programming | min fᵀ·x such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub, x(intcon) is integer-valued | intlinprog |
| Quadratic programming | min ½·xᵀ·H·x + fᵀ·x such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | quadprog |
| Constrained minimization | min f(x) such that c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fmincon |
| Semi-infinite minimization | min f(x) such that K(x,w) ≤ 0 for all w, c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fseminf |
Multiobjective Problems

| Type | Formulation | Solver |
|---|---|---|
| Goal attainment | min γ such that F(x) – w·γ ≤ goal, c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fgoalattain |
| Minimax | min max Fᵢ(x) such that c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fminimax |
Equation Solving Problems

| Type | Formulation | Solver |
|---|---|---|
| Linear equations | C·x = d, n equations, n variables | mldivide (\) |
| Nonlinear equation of one variable | f(x) = 0 | fzero |
| Nonlinear equations | F(x) = 0, n equations, n variables | fsolve |
Least-Squares (Model-Fitting) Problems

| Type | Formulation | Solver |
|---|---|---|
| Linear least-squares | min ‖C·x – d‖² (m equations, n variables) | mldivide (\), lsqlin |
| Nonnegative linear least-squares | min ‖C·x – d‖² such that x ≥ 0 | lsqnonneg |
| Constrained linear least-squares | min ‖C·x – d‖² such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | lsqlin |
| Nonlinear least-squares | min ‖F(x)‖² = min Σ Fᵢ(x)² such that lb ≤ x ≤ ub | lsqnonlin |
| Nonlinear curve fitting | min ‖F(x,xdata) – ydata‖² such that lb ≤ x ≤ ub | lsqcurvefit |