fsolve solves systems of nonlinear equations. However, it does not allow you to include any constraints, even bound constraints. The question is, how can you solve systems of nonlinear equations when you have constraints?
The short answer is, there are no guarantees that a solution exists that satisfies your constraints. There is no guarantee that any solution exists, even one that does not satisfy your constraints. Nevertheless, there are techniques that can help you search for solutions that satisfy your constraints.
To illustrate the techniques, consider how to solve the equations

F1(x) = (x1 + 1)(10 – x1)(1 + x2^2)/(1 + x2^2 + x2) = 0
F2(x) = (x2 + 2)(20 – x2)(1 + x1^2)/(1 + x1^2 + x1) = 0        (11-9)

where the components of x must be nonnegative. Clearly, there are four solutions to the equations:
x = (–1,–2), x = (10,–2), x = (–1,20), x = (10,20).
There is only one solution that satisfies the constraints, namely x = (10,20).
To solve the equations numerically, first enter code to calculate F(x).
function F = fbnd(x)
F(1) = (x(1)+1)*(10-x(1))*(1+x(2)^2)/(1+x(2)^2+x(2));
F(2) = (x(2)+2)*(20-x(2))*(1+x(1)^2)/(1+x(1)^2+x(1));
Save this code as the file fbnd.m on your MATLAB® path.
Generally, a system of N equations in N variables has isolated solutions, meaning each solution has no nearby neighbors that are also solutions. So one way to search for a solution that satisfies some constraints is to generate a number of initial points x0, and run fsolve starting at each x0.
For this example, to look for a solution to Equation 11-9, take 10 random points that are normally distributed with mean 0 and standard deviation 100.
rng default % for reproducibility
N = 10; % try 10 random start points
pts = 100*randn(N,2); % initial points are rows in pts
soln = zeros(N,2); % allocate solution
opts = optimoptions('fsolve','Display','off');
for k = 1:N
    soln(k,:) = fsolve(@fbnd,pts(k,:),opts); % find solutions
end
Examine the solutions in soln, and you find several that satisfy the constraints.
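To pick out the feasible solutions programmatically, you can keep only the rows of soln whose components are nonnegative. This snippet is a suggested follow-on to the loop above, not part of the original example; the tolerance value is an assumption.

```matlab
tol = 1e-6;                      % feasibility tolerance (assumed value)
feasible = all(soln >= -tol,2);  % rows with all components nonnegative
soln(feasible,:)                 % display only the feasible solutions
```

Because fsolve returns points that are only approximately feasible, comparing against a small negative tolerance rather than exactly zero avoids discarding solutions with tiny negative roundoff.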
There are three fsolve algorithms. Each can lead to different solutions. For this example, take x0 = [1,9] and examine the solution each algorithm returns.
x0 = [1,9];
opts = optimoptions(@fsolve,'Display','off',...
    'Algorithm','trust-region-dogleg');
x1 = fsolve(@fbnd,x0,opts)

x1 =
   -1.0000   -2.0000

opts.Algorithm = 'trust-region-reflective';
x2 = fsolve(@fbnd,x0,opts)

x2 =
   -1.0000   20.0000

opts.Algorithm = 'levenberg-marquardt';
x3 = fsolve(@fbnd,x0,opts)

x3 =
    0.9523    8.9941
Here, all three algorithms find different solutions for the same initial point. In fact, x3 is not even a solution, but is simply a locally stationary point.
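You can confirm this by evaluating the residual norm at the returned points; a true solution of F(x) = 0 gives a norm near zero. This check is a suggested diagnostic, not part of the original example.

```matlab
norm(fbnd(x1)) % near zero: x1 solves F(x) = 0
norm(fbnd(x3)) % well above zero: x3 only minimizes |F|^2 locally
```

Checking the residual norm is a good habit whenever a solver reports convergence, because fsolve can stop at a local minimum of the sum of squares that is not a root.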
lsqnonlin tries to minimize the sum of squares of the components of a vector function F(x). Therefore, it attempts to solve the equation F(x) = 0. Furthermore, lsqnonlin accepts bound constraints. Formulate the example problem for lsqnonlin and solve it.
lb = [0,0]; % lower bound constraint
rng default % reproducible initial point
x0 = 100*randn(2,1);
[x,res] = lsqnonlin(@fbnd,x0,lb)

x =
   10.0000   20.0000

res =
   2.4783e-25
You can use lsqnonlin with the Global Optimization Toolbox MultiStart solver to search over many initial points automatically. See MultiStart Using lsqcurvefit or lsqnonlin.
You can reformulate the problem and use fmincon as follows:

Give a constant objective function, such as @(x)0, which evaluates to 0 for each x.
Set the fsolve objective function as the nonlinear equality constraints in fmincon.
Give any other constraints in the usual fmincon syntax.
For this example, write a function file for the nonlinear constraints.
function [c,ceq] = fminconstr(x)
c = []; % no nonlinear inequality
ceq = fbnd(x); % the fsolve objective is fmincon constraints
Save this code as the file fminconstr.m on your MATLAB path.
Solve the constrained problem.
lb = [0,0]; % lower bound constraint
rng default % reproducible initial point
x0 = 100*randn(2,1);
opts = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)0,x0,[],[],[],[],lb,[],@fminconstr,opts)

x =
   10.0000   20.0000