The following table describes optimization options. Create options using the optimoptions function, or optimset for fminbnd, fminsearch, fzero, or lsqnonneg.
See the individual function reference pages for information about available option values and defaults.
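For example, a minimal sketch of creating options with each function (the specific option values shown are only illustrative):
% For solvers that take options created with optimoptions
options1 = optimoptions(@fmincon,'Display','iter');
% For fminbnd, fminsearch, fzero, or lsqnonneg, which take options created with optimset
options2 = optimset('Display','iter','TolX',1e-8);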
The default values for the options vary depending on which optimization
function you call with options
as an input argument.
You can determine the default option values for any of the optimization
functions by entering optimoptions(@solvername) or
the equivalent optimoptions('solvername').
For example,
optimoptions(@fmincon)
returns a list of the options and the default values for the
default 'interior-point'
fmincon
algorithm.
To find the default values for another fmincon
algorithm,
set the Algorithm
option. For example,
opts = optimoptions(@fmincon,'Algorithm','sqp')
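As a minimal sketch (the objective function, bounds, and starting point below are made up for illustration), pass the options as the last input argument when you call the solver:
opts = optimoptions(@fmincon,'Algorithm','sqp','Display','iter');
% Illustrative bound-constrained problem solved with the options above
objective = @(x) x(1)^2 + x(2)^2;
[x,fval] = fmincon(objective,[1;1],[],[],[],[],[-2;-2],[2;2],[],opts);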
Optimization Options
Option Name | Description | Used by Functions | Restrictions
---|---|---|---
Algorithm | Chooses the algorithm used by the solver. | fmincon , fminunc , fsolve , linprog , lsqcurvefit , lsqlin , lsqnonlin , quadprog | ||
AlwaysHonorConstraints | The default | fmincon | ||
BranchingRule | Rule for choosing the component for branching:
| intlinprog | optimoptions only | |
CutGeneration | Level of cut generation (see Cut Generation):
| intlinprog | optimoptions only | |
CutGenMaxIter | Number of passes through all cut generation methods before
entering the branch-and-bound phase, an integer from 1 through 50 .
Disable cut generation by setting the CutGeneration option
to 'none' . | intlinprog | optimoptions only | |
DerivativeCheck | Compare user-supplied analytic derivatives (gradients or Jacobian, depending on the selected solver) to finite differencing derivatives. |
| ||
Diagnostics | Display diagnostic information about the function to be minimized or solved. | All but | ||
DiffMaxChange | Maximum change in variables for finite differencing. |
| ||
DiffMinChange | Minimum change in variables for finite differencing. |
| ||
Display | Level of display.
| All. See the individual function reference pages for the values that apply. | ||
FinDiffRelStep | Scalar or vector step size factor for finite differences. When
you set
where
Scalar |
| ||
FinDiffType | Finite differences, used to estimate gradients, are either |
| ||
FunValCheck | Check whether objective function and constraints values
are valid.
|
| ||
GoalsExactAchieve | Specify the number of objectives required for the objective | |||
GradConstr | User-defined gradients for the nonlinear constraints. | |||
GradObj | User-defined gradients for the objective functions. | |||
HessFcn | Function handle to a user-supplied Hessian (see Hessian). | fmincon | ||
Hessian | If | |||
HessMult | Handle to a user-supplied Hessian multiply function.
For | |||
HessPattern | Sparsity pattern of the Hessian for finite differencing.
The size of the matrix is n-by-n, where n is the number of elements
in | |||
HessUpdate | Quasi-Newton updating scheme. | |||
Heuristics | Algorithm for searching for feasible points (see Heuristics for Finding Feasible Solutions):
| intlinprog | optimoptions only | |
HeuristicsMaxNodes | Strictly positive integer that bounds the number of nodes intlinprog can
explore in its branch-and-bound search for feasible points. See Heuristics for Finding Feasible Solutions. | intlinprog | optimoptions only | |
InitBarrierParam | Initial barrier value. | fmincon | ||
InitDamping | Initial Levenberg-Marquardt parameter. | fsolve , lsqcurvefit , lsqnonlin | optimoptions only | |
InitialHessMatrix This option will be removed in a future release. | Initial quasi-Newton matrix. | optimset only | ||
InitialHessType This option will be removed in a future release. | Initial quasi-Newton matrix type. | optimset only | ||
InitTrustRegionRadius | Initial radius of the trust region. | fmincon | ||
IPPreprocess | Types of integer preprocessing (see Mixed-Integer Program Preprocessing):
| intlinprog | optimoptions only | |
Jacobian | If | |||
JacobMult | User-defined Jacobian multiply function. Ignored unless | |||
JacobPattern | Sparsity pattern of the Jacobian for finite differencing.
The size of the matrix is | |||
LargeScale | Use large-scale algorithm if possible. | optimset only | |
LPMaxIter | Strictly positive integer, the maximum number of simplex algorithm iterations per node during the branch-and-bound process. | intlinprog | optimoptions only | |
LPPreprocess | Type of preprocessing for the solution to the relaxed linear
program (see Linear Program Preprocessing):
| intlinprog | optimoptions only | |
MaxFunEvals | Maximum number of function evaluations allowed. |
| ||
MaxIter | Maximum number of iterations allowed. | |||
MaxNodes | Strictly positive integer that is the maximum number of nodes the solver explores in its branch-and-bound process. | |||
MaxNumFeasPoints | Strictly positive integer. intlinprog stops
if it finds MaxNumFeasPoints integer feasible points. | intlinprog | optimoptions only | |
MaxPCGIter | Maximum number of iterations of preconditioned conjugate gradients method allowed. |
| ||
MaxProjCGIter | A tolerance for the number of projected conjugate gradient iterations; this is an inner iteration, not the number of iterations of the algorithm. | fmincon | ||
MaxSQPIter | Maximum number of iterations of sequential quadratic programming method allowed. | |||
MaxTime | Maximum amount of time in seconds allowed for the algorithm. | |||
MeritFunction | Use goal attainment/minimax merit function (multiobjective)
vs. | |||
MinAbsMax | Number of F(x) values for which to minimize the worst-case absolute value. | | |
NodeSelection | Choose the node to explore next.
| intlinprog | optimoptions only | |
ObjectiveCutOff | Real greater than -Inf . The default is Inf . | intlinprog | optimoptions only | |
ObjectiveLimit | If the objective function value goes below | fmincon , fminunc , quadprog | ||
OutputFcn | Specify one or more user-defined functions that the optimization
function calls at each iteration. See Output Function or |
| ||
PlotFcns | Plots various measures of progress while the algorithm executes, select from predefined plots or write your own.
See Plot Functions or |
| ||
PrecondBandWidth | Upper bandwidth of preconditioner for PCG. Setting to |
| ||
Preprocess | Level of LP preprocessing prior to simplex or dual simplex algorithm iterations. | optimoptions only | ||
RelLineSrchBnd | Relative bound on line search step length. | |||
RelLineSrchBndDuration | Number of iterations for which the bound specified in | |||
RelObjThreshold | Nonnegative real. intlinprog changes the
current feasible solution only when it locates another with an objective
function value that is at least RelObjThreshold lower: (fold
– fnew)/(1 + fold) > RelObjThreshold. | intlinprog | optimoptions only | |
RootLPAlgorithm | Algorithm for solving linear programs:
| intlinprog | optimoptions only | |
RootLPMaxIter | Nonnegative integer that is the maximum number of simplex algorithm iterations to solve the initial linear programming problem. | intlinprog | optimoptions only | |
ScaleProblem | For For
the other solvers, when using the | fmincon , fsolve , lsqcurvefit , lsqnonlin , quadprog | ||
Use | If | optimset only | ||
SubproblemAlgorithm | Determines how the iteration step is calculated. | fmincon | ||
TolCon | Tolerance on the constraint violation. |
| ||
TolConSQP | Constraint violation tolerance for the inner SQP iteration. | fgoalattain , fmincon , fminimax , fseminf | ||
TolFun | Termination tolerance on the function value. |
| ||
TolFunLP | Nonnegative real. Reduced costs must exceed TolFunLP for
a variable to be taken into the basis. | intlinprog | optimoptions only |
TolGapAbs | Nonnegative real. intlinprog stops if
the difference between the internally calculated upper (U )
and lower (L ) bounds on the objective function
is less than or equal to TolGapAbs :
| intlinprog | optimoptions only | |
TolGapRel | Real from 0 through 1 . intlinprog stops
if the relative difference between the internally calculated upper
(U ) and lower (L ) bounds on
the objective function is less than or equal to TolGapRel :
| intlinprog | optimoptions only | |
TolInteger | Real from 1e-6 through 1e-3 ,
the maximum deviation from integer that a component of the solution x can
have and still be considered an integer. TolInteger is
not a stopping criterion. | intlinprog | optimoptions only |
TolPCG | Termination tolerance on the PCG iteration. |
| ||
TolProjCG | A relative tolerance for projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration. | fmincon | ||
TolProjCGAbs | Absolute tolerance for projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration. | fmincon | ||
TolX | Termination tolerance on x. | All functions except | ||
TypicalX | Array that specifies the typical magnitude of the array of parameters x. |
| ||
UseParallel | When |
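As a sketch of setting a couple of the intlinprog options listed above, which require optimoptions (the specific values are only illustrative):
% Disable cut generation and cap the simplex iterations per node (illustrative values)
opts = optimoptions(@intlinprog,'CutGeneration','none','LPMaxIter',5000);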
The OutputFcn
field of options
specifies
one or more functions that an optimization function calls at each
iteration. Typically, you might use an output function to plot points
at each iteration or to display optimization quantities from the algorithm.
Using an output function you can view, but not set, optimization quantities.
To set up an output function, do the following:
Write the output function as a function file or local function.
Use optimoptions
to
set the value of OutputFcn
to be a function handle,
that is, the name of the function preceded by the @ sign. For example,
if the output function is outfun.m
, the command
options = optimoptions(@solvername,'OutputFcn', @outfun);
specifies OutputFcn
to be the handle to outfun
.
To specify more than one output function, use the syntax
options = optimoptions(@solvername,'OutputFcn',{@outfun, @outfun2});
Call the optimization function with options
as
an input argument.
See Output Functions for an example of an output function.
Passing Extra Parameters explains
how to parameterize the output function OutputFcn
,
if necessary.
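As a sketch of one common approach (the output function outfun and its extra tol input are hypothetical), wrap the output function in an anonymous function so that the extra parameter is fixed when you create the options:
% Assumes outfun is written to accept an extra argument:
%   stop = outfun(x,optimValues,state,tol)
tol = 0.01;
options = optimoptions(@fmincon,'OutputFcn',...
    @(x,optimValues,state) outfun(x,optimValues,state,tol));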
The function definition line of the output function has the following form:
stop = outfun(x, optimValues, state)
where
x
is the point computed by the
algorithm at the current iteration.
optimValues
is a structure containing
data from the current iteration. Fields in optimValues describes the structure in detail.
state
is the current state of the
algorithm. States of the Algorithm lists
the possible values.
stop
is a flag that is true
or false
depending
on whether the optimization routine should quit or continue. See Stop Flag for more information.
The optimization function passes the values of the input arguments
to outfun
at each iteration.
The following table lists the fields of the optimValues
structure.
A particular optimization function returns values for only some of
these fields. For each field, the Returned by Functions column of
the table lists the functions that return the field.
Corresponding Output Arguments. Some of the fields of optimValues
correspond
to output arguments of the optimization function. After the final
iteration of the optimization algorithm, the value of such a field
equals the corresponding output argument. For example, optimValues.fval
corresponds
to the output argument fval
. So, if you call fmincon
with
an output function and return fval
, the final value
of optimValues.fval
equals fval
.
The Description column of the following table indicates the fields
that have a corresponding output argument.
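For instance, here is a minimal sketch of an output function (the name recordfval and the workspace variable fvalHistory are hypothetical) that records optimValues.fval at every iteration; after the solver finishes, the last recorded value equals the fval output:
function stop = recordfval(x,optimValues,state)
% Record optimValues.fval at each iteration and export the history when done.
persistent history
stop = false;
switch state
    case 'init'
        history = [];
    case 'iter'
        history(end+1) = optimValues.fval;
    case 'done'
        assignin('base','fvalHistory',history);
end
end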
Command-Line Display. The values of some fields of optimValues
are
displayed at the command line when you call the optimization function
with the Display
field of options
set
to 'iter'
, as described in Iterative Display. For example, optimValues.fval
is
displayed in the f(x)
column. The Command-Line
Display column of the following table indicates the fields that you
can display at the command line.
Some optimValues
fields apply only to specific
algorithms:
AS — active-set
D — trust-region-dogleg
IP — interior-point
LM — levenberg-marquardt
Q — quasi-newton
SQP — sqp
TR — trust-region
TRR — trust-region-reflective
Some optimValues
fields exist in certain
solvers or algorithms, but are always filled with empty or zero values,
so are meaningless. These fields include:
constrviolation for fminunc TR and fsolve TRR.
procedure for fmincon TRR and SQP, and for fminunc.
optimValues Fields
OptimValues Field (optimValues.field) | Description | Returned by Functions | Command-Line Display |
---|---|---|---|
| Attainment factor for multiobjective problem. For details, see Goal Attainment Method. | None | |
| Number of conjugate gradient iterations at current optimization iteration. |
|
See Iterative Display. |
| Maximum constraint violation. |
|
See Iterative Display. |
| Measure of degeneracy. A point is degenerate if The partial derivative with respect to one of the variables is 0 at the point. A bound constraint is active for that variable at the point. See Degeneracy. |
| None |
| Directional derivative in the search direction. |
|
See Iterative Display. |
| First-order optimality (depends on algorithm). Final
value equals optimization function output |
|
See Iterative Display. |
| Cumulative number of function evaluations. Final value
equals optimization function output |
|
See Iterative Display. |
| Function value at current point. Final value equals optimization
function output For |
|
See Iterative Display. |
| Current gradient of objective function — either
analytic gradient if you provide it or finite-differencing approximation.
Final value equals optimization function output |
| None |
| Iteration number — starts at |
|
See Iterative Display. |
| The Levenberg-Marquardt parameter, | |
|
| Actual step length divided by initially predicted step length |
See Iterative Display. | |
| Maximum function value | fminimax | None |
|
|
| None |
| Procedure messages. |
|
See Iterative Display. |
| Ratio of change in the objective function to change in the quadratic approximation. |
| None |
| The residual vector. |
See Iterative Display. | |
| 2-norm of the residual squared. |
See Iterative Display. | |
| Search direction. |
| None |
| Status of the current trust-region step. Returns true if the current trust-region step was successful, and false if the trust-region step was unsuccessful. | | None |
| Current step size (displacement in |
|
See Iterative Display. |
| Radius of trust region. |
|
See Iterative Display. |
Degeneracy. The value of the field degenerate
, which
measures the degeneracy of the current optimization point x
,
is defined as follows. First, define a vector r
,
of the same size as x
, for which r(i)
is
the minimum distance from x(i)
to the ith
entries of the lower and upper bounds, lb
and ub
.
That is,
r = min(abs(ub-x), abs(x-lb))
Then the value of degenerate
is the minimum
entry of the vector r + abs(grad)
,
where grad
is the gradient of the objective function.
The value of degenerate
is 0 if there is an index i
for
which both of the following are true:
grad(i) = 0
x(i)
equals the ith
entry of either the lower or upper bound.
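A minimal sketch of this computation, assuming x, lb, ub, and grad are vectors of the same size:
% Distance from each x(i) to its nearer bound, then the degeneracy measure
r = min(abs(ub - x), abs(x - lb));
degenerate = min(r + abs(grad));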
The following table lists the possible values for state
:
State | Description |
---|---|
'init' | The algorithm is in the initial state before the first iteration. |
'interrupt' | The algorithm is in some computationally expensive part
of the iteration. In this state, the output function can interrupt
the current iteration of the optimization. At this time, the values
of |
'iter' | The algorithm is at the end of an iteration. |
'done' | The algorithm is in the final state after the last iteration. |
The following code illustrates how the output function might
use the value of state
to decide which tasks to
perform at the current iteration:
switch state
    case 'iter'
        % Make updates to plot or guis as needed
    case 'interrupt'
        % Probably no action here. Check conditions to see
        % whether optimization should quit.
    case 'init'
        % Setup for plots or guis
    case 'done'
        % Cleanup of plots, guis, or final plot
    otherwise
end
The output argument stop
is a flag that is true
or false
.
The flag tells the optimization function whether the optimization
should quit or continue. The following examples show typical ways
to use the stop
flag.
Stopping an Optimization Based on Data in optimValues. The output function can stop an optimization at any iteration
based on the current data in optimValues
. For example,
the following code sets stop
to true
if
the directional derivative is less than .01
:
function stop = outfun(x,optimValues,state)
stop = false;
% Check if directional derivative is less than .01.
if optimValues.directionalderivative < .01
    stop = true;
end
Stopping an Optimization Based on GUI Input. If you design a GUI to perform optimizations, you can make the
output function stop an optimization when a user clicks a Stop button
on the GUI. The following code shows how to do this, assuming that
the Stop button callback stores the value true
in
the optimstop
field of a handles
structure
called hObject
:
function stop = outfun(x,optimValues,state)
stop = false;
% Check if user has requested to stop the optimization.
stop = getappdata(hObject,'optimstop');
The PlotFcns
field of the options
structure
specifies one or more functions that an optimization function calls
at each iteration to plot various measures of progress while the algorithm
executes. The structure of a plot function is the same as that for
an output function. For more information on writing and calling a
plot function, see Output Function.
For an example of using built-in plot functions, see Using a Plot Function.
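For example, a sketch of selecting two of the predefined plot functions when creating options (optimplotfval and optimplotfirstorderopt are plot functions shipped with the toolbox; any solver that supports PlotFcns works similarly):
options = optimoptions(@fmincon,'PlotFcns',{@optimplotfval,@optimplotfirstorderopt});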
To view a predefined plot function listed for PlotFcns
,
you can open it in the MATLAB® Editor. For example, to view the
file corresponding to the norm of residuals, enter:
edit optimplotresnorm.m