Step 2: Write a function to compute Hessian-matrix products for H given a matrix Y.
Step 3: Call a nonlinear minimization routine with a starting point and linear equality constraints.
The fmincon interior-point and trust-region-reflective algorithms, and the fminunc trust-region algorithm, can solve problems where the Hessian is dense but structured. For these problems, fmincon and fminunc do not compute H*Y with the Hessian H directly, because forming H would be memory-intensive. Instead, you must provide fmincon or fminunc with a function that, given a matrix Y and information about H, computes W = H*Y.
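In outline, such a Hessian multiply function looks as follows. This is a hedged sketch, not code from the example; the name myHessMult and the assumption that Hinfo can multiply Y directly are illustrative:

function W = myHessMult(Hinfo,Y)
% Hinfo carries whatever information about H the objective function
% supplies; the body computes H*Y without ever forming H explicitly.
W = Hinfo*Y;   % replace with the structured product appropriate for H
end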
In this example, the objective function is nonlinear and linear equalities exist, so fmincon is used. The description applies to the trust-region-reflective algorithm; the fminunc trust-region algorithm is similar. For the interior-point algorithm, see the HessianMultiplyFcn option in Hessian Multiply Function. The objective function has the structure

f(x) = f̂(x) - (1/2)x'VV'x

where V is a 1000-by-2 matrix. The Hessian of f is dense, but the Hessian of f̂ is sparse. If the Hessian of f̂ is Ĥ, then H, the Hessian of f, is

H = Ĥ - VV'
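To see why avoiding H matters: VV' is a dense 1000-by-1000 matrix even though Ĥ is sparse. The following is a minimal sketch of the savings; the sparse stand-in speye(n) for Ĥ and the number of columns of Y are assumptions for illustration:

n = 1000;
Hhat = speye(n);             % sparse stand-in for the Hessian of fhat
V = randn(n,2);
Y = randn(n,5);

% Forming H explicitly creates a dense n-by-n matrix (n^2 doubles):
H = Hhat - V*V';
W1 = H*Y;

% The structured product never forms H; its largest temporary is n-by-5:
W2 = Hhat*Y - V*(V'*Y);

norm(W1 - W2)                % the two results agree up to round-off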
To avoid the excessive memory usage that would result from working with H directly, the example provides a Hessian multiply function, hmfleq1. This function, when passed a matrix Y, uses the sparse matrix Hinfo, which corresponds to Ĥ, and the matrix V to compute the Hessian matrix product

W = H*Y = (Hinfo - V*V')*Y
In this example, the Hessian multiply function needs Ĥ and V to compute the Hessian matrix product. V is a constant, so you can capture V in a function handle to an anonymous function. However, Ĥ is not a constant and must be computed at the current x. You can do this by computing Ĥ in the objective function and returning it as Hinfo in the third output argument. By using optimoptions to set the HessianMultiplyFcn option, fmincon knows to get the Hinfo value from the objective function and pass it to the Hessian multiply function hmfleq1.
The example passes brownvv to fmincon as the objective function. The brownvv.m file is long and is not included here. You can view the code with the command

type brownvv

Because brownvv computes the gradient as well as the objective function, the example (Step 3) uses optimoptions to set the SpecifyObjectiveGradient option to true.
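Although brownvv itself is not reproduced here, its calling interface is what matters for this setup. The following toy objective is a hypothetical stand-in (the name toyObjective and the constant tridiagonal Ĥ are illustrative assumptions, not what brownvv computes); it returns the same three outputs, with the sparse part of the Hessian in the third:

function [f,g,Hinfo] = toyObjective(x,V)
% Same interface as brownvv: objective, gradient, and sparse Hessian part.
% Here fhat(x) = (1/2)*x'*Hhat*x for a sparse tridiagonal Hhat, so
% f(x) = fhat(x) - (1/2)*x'*(V*V')*x.
n = numel(x);
e = ones(n,1);
Hhat = spdiags([-e 4*e -e],-1:1,n,n);   % sparse Hessian of fhat
f = 0.5*(x'*(Hhat*x)) - 0.5*(x'*(V*(V'*x)));
g = Hhat*x - V*(V'*x);                  % gradient of f
Hinfo = Hhat;                           % third output: corresponds to Hhat
end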
Now, define a function hmfleq1 that uses Hinfo, which is computed in brownvv, and V, which you can capture in a function handle to an anonymous function, to compute the Hessian matrix product W, where

W = H*Y = (Hinfo - V*V')*Y

This function must have the form

W = hmfleq1(Hinfo,Y)

The first argument must be the same as the third argument returned by the objective function brownvv. The second argument to the Hessian multiply function is the matrix Y (of W = H*Y).
Because fmincon expects the second argument Y to be used to form the Hessian matrix product, Y is always a matrix with n rows, where n is the number of dimensions in the problem. The number of columns in Y can vary. Finally, you can use a function handle to an anonymous function to capture V, so V can be the third argument to hmfleq1.
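For example, the following handle (the name hmf is arbitrary) fixes V at the time the handle is created; anonymous functions capture the current value of V, so later changes to V do not affect hmf:

hmf = @(Hinfo,Y) hmfleq1(Hinfo,Y,V);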
function W = hmfleq1(Hinfo,Y,V)
%HMFLEQ1 Hessian-matrix product function for BROWNVV objective.
%   W = hmfleq1(Hinfo,Y,V) computes W = (Hinfo-V*V')*Y
%   where Hinfo is a sparse matrix computed by BROWNVV
%   and V is a 2-column matrix.
W = Hinfo*Y - V*(V'*Y);
The function hmfleq1 is available in the optimdemos folder as hmfleq1.m.
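As a quick sanity check (the random test data and sizes here are arbitrary, not from the example), you can confirm that hmfleq1 matches the explicit product:

n = 6;
Hinfo = sprandsym(n,0.5);          % random sparse symmetric matrix
V = randn(n,2);
Y = randn(n,3);
W = hmfleq1(Hinfo,Y,V);
norm(W - (Hinfo - V*V')*Y)         % should be on the order of round-off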
Load the problem parameter, V, and the sparse equality constraint matrices, Aeq and beq, from fleq1.mat, which is available in the optimdemos folder.
Use optimoptions to set the SpecifyObjectiveGradient option to true and the HessianMultiplyFcn option to a function handle that points to hmfleq1. Call fmincon with objective function brownvv and with V as an additional parameter:
function [fval,exitflag,output,x] = runfleq1
% RUNFLEQ1 demonstrates the 'HessianMultiplyFcn' option for FMINCON
% with linear equalities.

problem = load('fleq1');   % Get V, Aeq, beq
V = problem.V;
Aeq = problem.Aeq;
beq = problem.beq;

n = 1000;                  % problem dimension
xstart = -ones(n,1);
xstart(2:2:n,1) = ones(length(2:2:n),1);   % starting point

options = optimoptions(@fmincon,...
    'Algorithm','trust-region-reflective',...
    'SpecifyObjectiveGradient',true, ...
    'HessianMultiplyFcn',@(Hinfo,Y)hmfleq1(Hinfo,Y,V),...
    'Display','iter',...
    'OptimalityTolerance',1e-9,...
    'FunctionTolerance',1e-9);

[x,fval,exitflag,output] = fmincon(@(x)brownvv(x,V),xstart,[],[],Aeq,beq,...
    [],[],[],options);
To run the preceding code, enter
[fval,exitflag,output,x] = runfleq1;
Because the iterative display was set using optimoptions, this command generates the following iterative display:
                                Norm of      First-order
 Iteration        f(x)          step          optimality   CG-iterations
     0            2297.63                       1.41e+03
     1            1081.92       7.03821              555           2
     2            474.935       8.17769              223           2
     3            116.236       11.3752             72.7           2
     4            44.5572       7.44815             23.5           2
     5            44.5572           100             23.5           3
     6            44.5572            25             23.5           0
     7            12.5815          6.25               40           0
     8           -94.8062          6.25               41           1
     9           -510.124          12.5              318           1
    10           -510.124          12.5              318           3
    11           -571.121         3.125             66.6           0
    12           -581.933       0.78125               99           4
    13           -621.295        1.5625             75.6           5
    14            -746.16         3.125              102           3
    15           -802.344       5.15367             33.5           3
    16           -822.362       1.39916             5.37           2
    17           -823.207      0.353415             1.06           2
    18           -823.245      0.124695            0.287           3
    19           -823.246     0.0278802            0.109           5
    20           -823.246    0.00337474          0.00122           7
    21           -823.246   0.000131293         7.67e-05           6

Local minimum possible.

fmincon stopped because the final change in function value relative to
its initial value is less than the selected value of the function tolerance.
Convergence is rapid for a problem of this size, and the PCG iteration cost increases only modestly as the optimization progresses. Feasibility of the equality constraints is maintained at the solution:
problem = load('fleq1'); % Get V, Aeq, beq
V = problem.V;
Aeq = problem.Aeq;
beq = problem.beq;
norm(Aeq*x-beq,inf)

ans =

   3.0198e-14
In this example, fmincon cannot use H to compute a preconditioner because H exists only implicitly. Instead of H, fmincon uses Hinfo, the third argument returned by brownvv, to compute a preconditioner. Hinfo is a good choice because it is the same size as H and approximates H to some degree. If Hinfo were not the same size as H, fmincon would compute a preconditioner based on diagonal scaling matrices determined from the algorithm. Typically, this would not perform as well.
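The following sketch illustrates the idea outside of fmincon; it is not fmincon's internal code, and the stand-in matrices, the scaling of V that keeps H positive definite, and the use of ichol are all assumptions. It builds a preconditioner from the sparse Hinfo alone and applies PCG to the implicitly defined H:

n = 200;
Hhat = gallery('tridiag',n,-1,4,-1);       % sparse SPD stand-in for Hinfo
V = randn(n,2)/20;                          % small correction keeps H SPD
b = randn(n,1);
Hmult = @(y) Hhat*y - V*(V'*y);             % H*y without forming H
L = ichol(Hhat);                            % factor built from Hinfo only
[~,~,~,it0] = pcg(Hmult,b,1e-8,200);        % unpreconditioned
[~,~,~,it1] = pcg(Hmult,b,1e-8,200,L,L');   % preconditioned via Hinfo
[it0 it1]             % preconditioned run typically needs fewer iterations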