The solver stopped because it reached a limit on the number of iterations or function evaluations before it minimized the objective to the requested tolerance. To proceed, try one or more of the following.
Set the Display option to 'iter'. This setting shows the results of the solver iterations.
To enable iterative display:
Using the Optimization app, choose Level of display to be iterative or iterative with detailed message.
At the MATLAB® command line, enter
options = optimoptions('solvername','Display','iter');
Call the solver using the options structure.
For an example of iterative display, see Interpreting the Result.
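As a minimal sketch of enabling iterative display, assuming fmincon as the solver and an illustrative objective (both are placeholders for your own problem):

options = optimoptions('fmincon','Display','iter');
fun = @(x) x(1)^2 + 3*x(2)^2;   % illustrative objective, not from your problem
x = fmincon(fun,[1;2],[],[],[],[],[],[],[],options);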
What to Look For in Iterative Display.
See if the objective function (Fval or f(x) or Resnorm) decreases. Decrease indicates progress.
Examine the constraint violation (Max constraint) to ensure that it decreases towards 0. Decrease indicates progress.
See if the first-order optimality decreases towards 0. Decrease indicates progress.
See if the Trust-region radius decreases to a small value. This decrease indicates that the objective might not be smooth.
What to Do.
If the solver seemed to progress:
Set MaxIter and/or MaxFunEvals to values larger than the defaults. You can see the default values in the Optimization app, or in the Options table in the solver's function reference pages. (A sketch of both suggestions follows.)
Start the solver from its last calculated point.
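A minimal sketch of both suggestions, assuming fmincon and the older option names used in this documentation; the limit values are illustrative, and fun, xfinal, and the constraint arguments stand in for your own problem data:

options = optimoptions('fmincon','MaxIter',2000,'MaxFunEvals',10000);
% Restart from xfinal, the last point returned by the previous run:
x = fmincon(fun,xfinal,A,b,Aeq,beq,lb,ub,nonlcon,options);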
If the solver is not progressing, try the other listed suggestions.
If TolX or TolFun, for example, are too small, the solver might not recognize when it has reached a minimum; it can make futile iterations indefinitely.
To change tolerances using the Optimization app, use the Stopping criteria list at the top of the Options pane.
To change tolerances at the command line, use optimoptions as described in Set and Change Options.
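For example, a minimal sketch assuming fmincon; the tolerance values are illustrative, not recommendations:

options = optimoptions('fmincon','TolFun',1e-4,'TolX',1e-4);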
The DiffMaxChange and DiffMinChange options can affect a solver's progress. These options control the step size in finite differencing for derivative estimation.
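A minimal sketch of adjusting these options, again assuming fmincon; the step sizes are illustrative:

options = optimoptions('fmincon','DiffMinChange',1e-4,'DiffMaxChange',0.1);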
Check your objective and nonlinear constraint function definitions. For example, check that they return the correct values at some points; see Check your Objective and Constraint Functions. Check that an infeasible point does not cause an error in your functions; see Iterations Can Violate Constraints.
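For instance, a minimal sketch of such a spot check, where myobj, mycon, and xtest stand in for your own objective function, constraint function, and a representative point:

fval = myobj(xtest)      % should be a finite scalar
[c,ceq] = mycon(xtest)   % c and ceq should have the expected sizes and finite values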
Solvers run more reliably when each coordinate has about the same effect on the objective and constraint functions. Multiply your coordinate directions by appropriate scalars to equalize the effect of each coordinate. Add appropriate values to certain coordinates to equalize their size.
Example: Centering and Scaling. Consider minimizing 1e6*x(1)^2 + 1e-6*x(2)^2:
f = @(x) 10^6*x(1)^2 + 10^-6*x(2)^2;
Minimize f using the fminunc 'quasi-newton' algorithm:
opts = optimoptions('fminunc','Display','none','Algorithm','quasi-newton');
x = fminunc(f,[0.5;0.5],opts)

x =
         0
    0.5000
The result is incorrect; poor scaling interfered with obtaining a good solution.
Scale the problem. Set
D = diag([1e-3,1e3]);
fr = @(y) f(D*y);
y = fminunc(fr,[0.5;0.5],opts)

y =
     0
     0 % the correct answer
Similarly, poor centering can interfere with a solution.
fc = @(z)fr([z(1)-1e6;z(2)+1e6]); % poor centering
z = fminunc(fc,[.5 .5],opts)

z =
   1.0e+005 *
   10.0000  -10.0000 % looks good, but...

z - [1e6 -1e6] % checking how close z is to 1e6

ans =
   -0.0071    0.0078 % reveals a distance

fcc = @(w)fc([w(1)+1e6;w(2)-1e6]); % centered
w = fminunc(fcc,[.5 .5],opts)

w =
     0     0 % the correct answer
If you do not provide gradients or Jacobians, solvers estimate gradients and Jacobians by finite differences. Therefore, providing these derivatives can save computational time, and can lead to increased accuracy.
For constrained problems, providing a gradient has another advantage.
A solver can reach a point x such that x is feasible, but finite differences around x always lead to an infeasible point. In this case, a solver can fail or halt prematurely. Providing a gradient allows a solver to proceed.
Provide gradients or Jacobians in the files for your objective function and nonlinear constraint functions. For details of the syntax, see Writing Scalar Objective Functions, Writing Vector and Matrix Objective Functions, and Nonlinear Constraints.
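As a minimal sketch of an objective file that returns its gradient as a second output (the function name and formula are placeholders for your own problem):

function [f,g] = myobjwithgrad(x)
% Objective value and, when the solver requests it, the gradient
f = x(1)^2 + 3*x(2)^2;
if nargout > 1                % gradient requested
    g = [2*x(1); 6*x(2)];
end
end

Then tell the solver that the gradient is supplied, for example with the older option name used in this documentation: options = optimoptions('fmincon','GradObj','on').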
To check that your gradient or Jacobian function is correct, use the DerivativeCheck option, as described in Checking Validity of Gradients or Jacobians.
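For example (a sketch, using the older option names in this documentation):

options = optimoptions('fmincon','GradObj','on','DerivativeCheck','on');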
If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians programmatically. For an example, see Symbolic Math Toolbox Calculates Gradients and Hessians.
For examples using gradients and Jacobians, see Minimization with Gradient and Hessian, Nonlinear Constraints with Gradients, Symbolic Math Toolbox Calculates Gradients and Hessians, Nonlinear Equations with Analytic Jacobian, and Nonlinear Equations with Jacobian.
Solvers often run more reliably and with fewer iterations when you supply a Hessian.
The following solvers and algorithms accept Hessians:
fmincon interior-point.
Write the Hessian as a separate function. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
fmincon trust-region-reflective. Give the Hessian as the third output of the objective function. For an example, see Minimization with Dense Structured Hessian, Linear Equalities.
fminunc trust-region. Give the Hessian as the third output of the objective function. For an example, see Minimization with Gradient and Hessian.
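A minimal sketch of the fminunc trust-region case, with a placeholder objective; the option names are the older ones used in this documentation:

function [f,g,H] = myobjwithhess(x)
% Objective, gradient, and Hessian as successive outputs
f = x(1)^2 + 3*x(2)^2;
if nargout > 1
    g = [2*x(1); 6*x(2)];   % gradient
    if nargout > 2
        H = [2 0; 0 6];     % Hessian
    end
end
end

A call such as opts = optimoptions('fminunc','Algorithm','trust-region','GradObj','on','Hessian','on'); x = fminunc(@myobjwithhess,[1;2],opts) then uses all three outputs.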
If you have a Symbolic Math Toolbox license, you can calculate gradients and Hessians programmatically. For an example, see Symbolic Math Toolbox Calculates Gradients and Hessians.
Usually, you get this result because the solver was unable to find a point satisfying all constraints to within the TolCon constraint tolerance. However, the solver might have located or started at a feasible point, and converged to an infeasible point. If the solver lost feasibility, see Solver Lost Feasibility.
To proceed when the solver found no feasible point, try one or more of the following.
Try finding a point that satisfies the bounds and linear constraints by solving a linear programming problem.
Define a linear programming problem with an objective function that is always zero:
f = zeros(size(x0)); % assumes x0 is the initial point
Solve the linear programming problem to see if there is a feasible point:
xnew = linprog(f,A,b,Aeq,beq,lb,ub);
If there is a feasible point xnew, use xnew as the initial point and rerun your original problem.
If there is no feasible point, your problem is not well-formulated. Check the definitions of your bounds and linear constraints.
After ensuring that your bounds and linear constraints are feasible (contain a point satisfying all constraints), check your nonlinear constraints.
Set your objective function to zero:
@(x)0
Run your optimization with all constraints and with the zero objective. If you find a feasible point xnew, set x0 = xnew and rerun your original problem.
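A minimal sketch, assuming fmincon and that A, b, Aeq, beq, lb, ub, and nonlcon are the constraints of your original problem:

[xnew,~,exitflag] = fmincon(@(x)0,x0,A,b,Aeq,beq,lb,ub,nonlcon);
if exitflag > 0    % a feasible point was found
    x0 = xnew;     % restart the original problem from xnew
end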
If you do not find a feasible point using a zero objective function, use the zero objective function with several initial points.
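A sketch of this multistart search, assuming finite lb and ub so that random points can be drawn between them:

for k = 1:10
    x0try = lb + rand(size(lb)).*(ub - lb);   % random point within the bounds
    [xnew,~,exitflag] = fmincon(@(x)0,x0try,A,b,Aeq,beq,lb,ub,nonlcon);
    if exitflag > 0
        break                                 % feasible point found
    end
end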
If you find a feasible point xnew, set x0 = xnew and rerun your original problem.
If you do not find a feasible point, try relaxing the constraints, discussed next.
Try relaxing your nonlinear inequality constraints, then tightening them.
Change the nonlinear constraint function c to return c-Δ, where Δ is a positive number. This change makes your nonlinear constraints easier to satisfy.
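For instance, a sketch of a relaxed constraint file, where mycon stands in for your original nonlinear constraint function:

function [c,ceq] = relaxedcon(x,Delta)
% Relaxed inequalities: easier to satisfy than the originals
[c,ceq] = mycon(x);   % your original constraints
c = c - Delta;        % subtract a positive Delta from each inequality
end

Pass it to the solver as, for example, @(x)relaxedcon(x,1e-2).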
Look for a feasible point for the new constraint function, using either your original objective function or the zero objective function.
If you find a feasible point, reduce Δ, then look for a feasible point for the new constraint function, starting at the previously found point.
If you do not find a feasible point, try increasing Δ and looking again.
If you find no feasible point, your problem might be truly infeasible, meaning that no solution exists. Check all your constraint definitions again.
If the solver started at a feasible point, but converged to an infeasible point, try the following techniques.
Try a different algorithm. The fmincon 'sqp' and 'interior-point' algorithms are usually the most robust, so try one or both of them first (see the sketch after these suggestions).
Tighten the bounds. Give the highest lb and lowest ub vectors that you can. This can help the solver to maintain feasibility. The fmincon 'sqp' and 'interior-point' algorithms obey bounds at every iteration, so tight bounds help throughout the optimization.
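A minimal sketch of both techniques, with placeholder problem data and illustrative bound values:

options = optimoptions('fmincon','Algorithm','sqp');   % or 'interior-point'
lb = [-10; -10];   % tightest bounds you can justify
ub = [ 10;  10];
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options);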
The solver reached a point whose objective function was less than the objective limit tolerance.
Your problem might be truly unbounded. In other words, there might be a sequence of points xi, all satisfying the problem constraints, with lim f(xi) = –∞.
Check that your problem is formulated correctly. Solvers try to minimize objective functions; if you want a maximum, change your objective function to its negative. For an example, see Maximizing an Objective.
Try scaling or centering your problem. See Center and Scale Your Problem.
Relax the objective limit tolerance by using optimoptions to reduce the value of the ObjectiveLimit tolerance.
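For example (a sketch, assuming fminunc; the value is illustrative):

options = optimoptions('fminunc','ObjectiveLimit',-1e50);   % default is -1e20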
fsolve can fail to solve an equation for various reasons. Here are some suggestions for how to proceed:
Try Changing the Initial Point. fsolve relies on an initial point. By giving it different initial points, you increase the chances of success.
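For instance (a sketch with an illustrative system, not your own equations):

F = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)^3];   % illustrative system F(x) = 0
x1 = fsolve(F,[1;1])     % one initial point
x2 = fsolve(F,[-1;-1])   % a different initial point can reach a different root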
Check the definition of the equation to make sure that it is smooth. fsolve might fail to converge for equations with discontinuous gradients, such as absolute value. fsolve can fail to converge for functions with discontinuities.
Check that the equation is "square," meaning equal dimensions for input and output (has the same number of unknowns as values of the equation).
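A quick sketch of this check, where F and x0 stand in for your equation function and initial point:

numel(F(x0)) == numel(x0)   % should be true for a square system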
Change tolerances, especially TolFun and TolX. If you attempt to get high accuracy by setting tolerances to very small values, fsolve can fail to converge. If you set tolerances that are too high, fsolve can fail to solve an equation accurately.
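For example (a sketch; the values are illustrative):

options = optimoptions('fsolve','TolFun',1e-8,'TolX',1e-8);
x = fsolve(F,x0,options);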
Check the problem definition. Some problems have no real solution, such as x^2 + 1 = 0.