Solve constrained linear least-squares problems
Linear least-squares solver with bounds or linear constraints.
Solves least-squares curve fitting problems of the form

    minimize (1/2)*norm(C*x - d)^2 over x, subject to A*x ≤ b, Aeq*x = beq, and lb ≤ x ≤ ub.

x = lsqlin(problem) finds the minimum for problem, where problem is a structure. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(___), for any input arguments described above, returns:
- The squared 2-norm of the residual, resnorm = norm(C*x - d)^2
- The residual, residual = C*x - d
- A value exitflag describing the exit condition
- A structure output containing information about the optimization process
- A structure lambda containing the Lagrange multipliers
The factor ½ in the definition of the problem affects the
values in the lambda
structure.
Find the x
that minimizes
the norm of C*x - d
for an overdetermined problem
with linear inequality constraints.
Specify the problem and constraints.
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
Call lsqlin
to solve the problem.
x = lsqlin(C,d,A,b)
Warning: The trust-region-reflective algorithm can handle bound
constraints only; using active-set algorithm instead.

Optimization terminated.

x =

    0.1299
   -0.5757
    0.4251
    0.2438
Find the x
that minimizes
the norm of C*x - d
for an overdetermined problem
with linear inequality constraints and bounds.
Specify the problem and constraints.
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
Aeq = [3 5 7 9];
beq = 4;
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
Call lsqlin
to solve the problem.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
Warning: The trust-region-reflective algorithm can handle bound
constraints only; using active-set algorithm instead.

Optimization terminated.

x =

   -0.1000
   -0.1000
    0.1599
    0.4090
Find the x
that minimizes the norm of C*x - d
for an overdetermined problem with linear inequality constraints. Use a start point and nondefault options.
Specify the problem and constraints.
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
Aeq = [];
beq = [];
lb = [];
ub = [];
Set a start point.
x0 = 0.1*ones(4,1);
Set options to choose the 'active-set'
algorithm, which is the only algorithm that uses a start point.
options = optimoptions('lsqlin','Algorithm','active-set');
Call lsqlin
to solve the problem.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
Optimization terminated.

x =

    0.1299
   -0.5757
    0.4251
    0.2438
Obtain and interpret all lsqlin
outputs.
Define a problem with linear inequality constraints and bounds. The problem is overdetermined because there are four columns in the C
matrix but five rows. This means the problem has four unknowns and five conditions, even before including the linear constraints and bounds.
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
Set options to use the 'interior-point'
algorithm.
options = optimoptions('lsqlin','Algorithm','interior-point');
The 'interior-point'
algorithm does not use an initial point, so set x0
to []
.
x0 = [];
Call lsqlin
with all outputs.
[x,resnorm,residual,exitflag,output,lambda] = ...
lsqlin(C,d,A,b,[],[],lb,ub,x0,options)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the function tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.

x =

   -0.1000
   -0.1000
    0.2152
    0.3502

resnorm =

    0.1672

residual =

    0.0455
    0.0764
   -0.3562
    0.1620
    0.0784

exitflag =

     1

output =

            message: 'Minimum found that satisfies the constraints....'
          algorithm: 'interior-point'
      firstorderopt: 1.6296e-08
    constrviolation: 0
         iterations: 6
       cgiterations: []

lambda =

    ineqlin: [3x1 double]
      eqlin: [0x1 double]
      lower: [4x1 double]
      upper: [4x1 double]
Examine the nonzero Lagrange multiplier fields in more detail. First examine the Lagrange multipliers for the linear inequality constraint.
lambda.ineqlin
ans =

    0.0000
    0.2392
    0.0000
Lagrange multipliers are nonzero exactly when the solution is on the corresponding constraint boundary. In other words, Lagrange multipliers are nonzero when the corresponding constraint is active. lambda.ineqlin(2)
is nonzero. This means that the second element in A*x
should equal the second element in b
, because the constraint is active.
[A(2,:)*x,b(2)]
ans =

    0.2026    0.2026
Now examine the Lagrange multipliers for the lower and upper bound constraints.
lambda.lower
ans =

    0.0409
    0.2784
    0.0000
    0.0000
lambda.upper
ans =

   1.0e-10 *

    0.4665
    0.4751
    0.5537
    0.6247
The first two elements of lambda.lower
are nonzero. You see that x(1)
and x(2)
are at their lower bounds, -0.1
. All elements of lambda.upper
are essentially zero, and you see that all components of x
are less than their upper bound, 2
.
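As an additional check (not part of the original example), you can compare x with the bounds directly to see which components sit on a bound:

% Quick check of which components are at their bounds
% (illustrative; uses x, lb, and ub from this example).
atLower = abs(x - lb) < 1e-6   % expect the first two components to be true
atUpper = abs(ub - x) < 1e-6   % expect all components to be false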
C — Multiplier matrix
real matrix

Multiplier matrix, specified as a matrix of doubles. C represents the multiplier of the solution x in the expression C*x - d. C is M-by-N, where M is the number of equations, and N is the number of elements of x.

Example: C = [1,4;2,5;7,8]

Data Types: double
d — Constant vector
real vector

Constant vector, specified as a vector of doubles. d represents the additive constant term in the expression C*x - d. d is M-by-1, where M is the number of equations.

Example: d = [5;0;-12]

Data Types: double
A — Linear inequality constraint matrix
real matrix

Linear inequality constraint matrix, specified as a matrix of doubles. A represents the linear coefficients in the constraints A*x ≤ b. A has size Mineq-by-N, where Mineq is the number of constraints and N is the number of elements of x. To save memory, pass A as a sparse matrix.

Example: A = [4,3;2,0;4,-1]; means three linear inequalities (three rows) for two decision variables (two columns).

Data Types: double
b — Linear inequality constraint vector
real vector

Linear inequality constraint vector, specified as a vector of doubles. b represents the constant vector in the constraints A*x ≤ b. b has length Mineq, where A is Mineq-by-N.

Example: [4,0]

Data Types: double
Aeq — Linear equality constraint matrix
[] (default) | real matrix

Linear equality constraint matrix, specified as a matrix of doubles. Aeq represents the linear coefficients in the constraints Aeq*x = beq. Aeq has size Meq-by-N, where Meq is the number of constraints and N is the number of elements of x. To save memory, pass Aeq as a sparse matrix.

Example: Aeq = [4,3;2,0;4,-1]; means three linear equalities (three rows) for two decision variables (two columns).

Data Types: double
beq — Linear equality constraint vector
[] (default) | real vector

Linear equality constraint vector, specified as a vector of doubles. beq represents the constant vector in the constraints Aeq*x = beq. beq has length Meq, where Aeq is Meq-by-N.

Example: [4,0]

Data Types: double
lb — Lower bounds
[] (default) | real vector or array

Lower bounds, specified as a vector or array of doubles. lb represents the lower bounds elementwise in lb ≤ x ≤ ub.

Internally, lsqlin converts an array lb to the vector lb(:).

Example: lb = [0;-Inf;4] means x(1) ≥ 0, x(3) ≥ 4.

Data Types: double
ub — Upper bounds
[] (default) | real vector or array

Upper bounds, specified as a vector or array of doubles. ub represents the upper bounds elementwise in lb ≤ x ≤ ub.

Internally, lsqlin converts an array ub to the vector ub(:).

Example: ub = [Inf;4;10] means x(2) ≤ 4, x(3) ≤ 10.

Data Types: double
x0 — Initial point
[] (default) | real vector or array

Initial point for the solution process, specified as a vector or array of doubles. x0 is used only by the 'active-set' algorithm. Optional.

If you do not provide an x0 for the 'active-set' algorithm, lsqlin sets x0 to the zero vector. If any component of this zero vector x0 violates the bounds, lsqlin sets x0 to a point in the interior of the box defined by the bounds.

Example: x0 = [4;-3]

Data Types: double
options — Options for lsqlin
options created using optimoptions or the Optimization app

Options for lsqlin, specified as the output of the optimoptions function or the Optimization app.
All Algorithms

Algorithm | Choose the algorithm: 'trust-region-reflective', 'interior-point', or 'active-set'. For more information on choosing the algorithm, see Choosing the Algorithm.
Diagnostics | Display diagnostic information about the function to be minimized or solved. The choices are 'on' or 'off' (the default).
Display | Level of display returned to the command line.
MaxIter | Maximum number of iterations allowed, a positive integer.
trust-region-reflective Algorithm Options

JacobMult | Function handle for the Jacobian multiply function. For large-scale structured problems, this function should compute the Jacobian matrix product without forming the matrix explicitly, using a function of the form W = jmfun(Jinfo,Y,flag). See Jacobian Multiply Function with Linear Least Squares for an example.
MaxPCGIter | Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar.
PrecondBandWidth | Upper bandwidth of preconditioner for PCG (preconditioned conjugate gradient). By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients.
TolFun | Termination tolerance on the function value, a positive scalar.
TolPCG | Termination tolerance on the PCG (preconditioned conjugate gradient) iteration, a positive scalar.
TypicalX | Typical x values.
interior-point Algorithm Options

TolCon | Tolerance on the constraint violation, a positive scalar.
TolFun | Termination tolerance on the function value, a positive scalar.
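As an illustration (not from the reference tables above), several of these settings can be combined in a single optimoptions call; the tolerance and iteration values here are arbitrary, not documented defaults:

% Illustrative combination of the options described above
% (arbitrary values, chosen only for the example).
options = optimoptions('lsqlin', ...
    'Algorithm','interior-point', ...
    'TolFun',1e-10, ...
    'TolCon',1e-10, ...
    'MaxIter',50, ...
    'Display','final');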
problem — Optimization problem
structure

Optimization problem, specified as a structure with the following fields.

C | Matrix multiplier in the term C*x - d
d | Additive constant in the term C*x - d
Aineq | Matrix for linear inequality constraints
bineq | Vector for linear inequality constraints
Aeq | Matrix for linear equality constraints
beq | Vector for linear equality constraints
lb | Vector of lower bounds
ub | Vector of upper bounds
x0 | Initial point for x
solver | 'lsqlin'
options | Options created with optimoptions
Create the problem
structure by exporting
a problem from the Optimization app, as described in Exporting Your Work.
Data Types: struct
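Although the documented way to obtain this structure is exporting from the Optimization app, a structure with the fields above can also be assembled directly; this sketch assumes the field names shown in the table (in particular Aineq, bineq, and solver):

% Sketch of a manually assembled problem structure
% (field names assumed from the table above).
problem.C       = C;
problem.d       = d;
problem.Aineq   = A;
problem.bineq   = b;
problem.Aeq     = [];
problem.beq     = [];
problem.lb      = lb;
problem.ub      = ub;
problem.x0      = [];
problem.solver  = 'lsqlin';
problem.options = optimoptions('lsqlin');

x = lsqlin(problem);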
x — Solution
real vector

Solution, returned as a vector that minimizes the norm of C*x-d subject to all bounds and linear constraints.
resnorm — Objective value
real scalar

Objective value, returned as the scalar value norm(C*x-d)^2.
residual — Solution residuals
real vector

Solution residuals, returned as the vector C*x-d.
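As a quick consistency check (not part of the reference text), resnorm and residual are related as follows; the variables C, d, A, and b are taken from the examples above:

% resnorm is the squared 2-norm of residual, i.e. sum(residual.^2).
[x,resnorm,residual] = lsqlin(C,d,A,b);
disp(resnorm - norm(C*x - d)^2)    % expected to be near zero
disp(resnorm - sum(residual.^2))   % expected to be near zero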
exitflag — Algorithm stopping condition
integer

Algorithm stopping condition, returned as an integer identifying the reason the algorithm stopped. The following lists the values of exitflag and the corresponding reasons lsqlin stopped.
1 | Function converged to a solution.
3 | Change in the residual was smaller than the specified tolerance.
0 | Number of iterations exceeded options.MaxIter.
-2 | The problem is infeasible.
-4 | Ill-conditioning prevents further optimization.
-7 | Magnitude of search direction became too small. No further progress could be made.
The exit message for the interior-point
algorithm
can give more details on the reason lsqlin
stopped,
such as exceeding a tolerance. See Exit Flags and Exit Messages.
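For example, a call can branch on exitflag to handle nonconvergence; this is an illustration rather than part of the reference page, using the variables from the examples above:

% Illustrative handling of the exit condition after a solve.
[x,~,~,exitflag,output] = lsqlin(C,d,A,b,[],[],lb,ub);
if exitflag <= 0
    warning('lsqlin stopped without converging: %s', output.message);
end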
output — Solution process summary
structure

Solution process summary, returned as a structure containing information about the optimization process.
iterations | Number of iterations the solver took.
algorithm | One of these algorithms: 'active-set', 'interior-point', or 'trust-region-reflective'.
constrviolation | Constraint violation that is positive for violated constraints (not returned for the 'trust-region-reflective' algorithm).
message | Exit message.
firstorderopt | First-order optimality at the solution. See First-Order Optimality Measure.
cgiterations | Number of conjugate gradient iterations the solver performed. Nonempty only for the 'trust-region-reflective' algorithm.
See Output Structures.
lambda — Lagrange multipliers
structure

Lagrange multipliers, returned as a structure with the following fields.

lower | Lower bounds
upper | Upper bounds
ineqlin | Linear inequalities
eqlin | Linear equalities
For problems with no constraints, you can use \
(matrix
left division). When you have no constraints, lsqlin
returns x = C\d
.
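As a quick illustration using the C and d from the examples above, the two approaches should agree to within roundoff:

% Unconstrained least squares: lsqlin with no constraints
% matches matrix left division.
xBackslash = C\d;
xLsqlin    = lsqlin(C,d,[],[]);
norm(xBackslash - xLsqlin)   % expected to be close to zero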
Because the problem being solved is always convex, lsqlin
finds
a global, although not necessarily unique, solution.
Better numerical results are likely if you specify
equalities explicitly, using Aeq
and beq
,
instead of implicitly, using lb
and ub
.
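For example, to fix x(3) at the value 5 in a four-variable problem like the ones above, an explicit equality row is preferable to collapsing the bounds (a sketch, not from the reference page):

% Preferred: state the equality explicitly.
Aeq = [0 0 1 0];
beq = 5;
x = lsqlin(C,d,A,b,Aeq,beq);

% Discouraged equivalent: encode the same equality through the bounds.
% lb = [-Inf;-Inf;5;-Inf];  ub = [Inf;Inf;5;Inf];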
The trust-region-reflective
algorithm
does not allow equal upper and lower bounds. Use another algorithm
for this case.
If the specified input bounds for a problem are inconsistent,
the output x
is x0
and the outputs resnorm
and residual
are []
.
You can solve some large structured problems, including
those where the C
matrix is too large to fit in
memory, using the trust-region-reflective
algorithm
with a Jacobian multiply function. For information, see trust-region-reflective
Algorithm Options.
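The sketch below shows the general shape of such a Jacobian multiply function. The flag convention (flag > 0 computes C*Y, flag < 0 computes C'*Y, flag == 0 computes C'*(C*Y)) is assumed from the Jacobian Multiply Function with Linear Least Squares example, and the commented call shows one possible wiring, not a definitive recipe:

function W = jmfun(Jinfo,Y,flag)
% Sketch of a Jacobian multiply function for lsqlin.
% Jinfo carries whatever data is needed to form the products without
% building C explicitly; here it is assumed to be a (possibly sparse) matrix.
if flag > 0
    W = Jinfo*Y;              % C*Y
elseif flag < 0
    W = Jinfo'*Y;             % C'*Y
else
    W = Jinfo'*(Jinfo*Y);     % C'*(C*Y)
end
end

% Hypothetical usage (assumed wiring; see the linked example for details):
% opts = optimoptions('lsqlin','Algorithm','trust-region-reflective', ...
%                     'JacobMult',@jmfun);
% x = lsqlin(Jinfo,d,[],[],[],[],lb,ub,[],opts);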
When the problem given to lsqlin has only upper and lower bounds (that is, no linear inequalities or equalities are specified) and the matrix C has at least as many rows as columns, the default algorithm is trust-region-reflective.
This method is a subspace trust-region method based on the interior-reflective
Newton method described in [1].
Each iteration involves the approximate solution of a large linear
system using the method of preconditioned conjugate gradients (PCG).
See Trust-Region-Reflective Least Squares,
and in particular Large Scale Linear Least Squares.
lsqlin
uses the active-set
algorithm
when you specify it with optimoptions
, or when
you give linear inequalities or equalities. The algorithm is based
on quadprog
, which uses an active
set method similar to that described in [2]. It finds an initial feasible solution by first solving
a linear programming problem. See active-set
quadprog
Algorithm.
The 'interior-point'
algorithm is based on
the quadprog
'interior-point-convex'
algorithm.
See Interior-Point Linear Least Squares.
[1] Coleman, T. F. and Y. Li. "A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables," SIAM Journal on Optimization, Vol. 6, Number 4, pp. 1040–1058, 1996.
[2] Gill, P. E., W. Murray, and M. H. Wright. Practical Optimization, Academic Press, London, UK, 1981.