To minimize a large-scale quadratic with upper and lower bounds, you can use the quadprog function with the 'trust-region-reflective' algorithm.
The problem stored in the MAT-file qpbox1.mat is a positive definite quadratic with a tridiagonal Hessian matrix H, subject to upper (ub) and lower (lb) bounds.
load qpbox1 % Get H
lb = zeros(400,1); lb(400) = -inf;
ub = 0.9*ones(400,1); ub(400) = inf;
f = zeros(400,1); f([1 400]) = -2;
xstart = 0.5*ones(400,1);
options = optimoptions('quadprog','Algorithm','trust-region-reflective');
[x,fval,exitflag,output] = ...
    quadprog(H,f,[],[],[],[],lb,ub,xstart,options);
Look at the resulting values of exitflag, output.iterations, and output.cgiterations:
exitflag,output.iterations,output.cgiterations

exitflag =
     3
ans =
    19
ans =
        1696
You can see that while convergence occurred in 19 iterations, the high number of CG iterations indicates that the cost of solving the linear system is high. In light of this cost, try using a direct solver at each iteration by setting the SubproblemAlgorithm option to 'factorization':
options = optimoptions(options,'SubproblemAlgorithm','factorization');
[x,fval,exitflag,output] = ...
    quadprog(H,f,[],[],[],[],lb,ub,xstart,options);
Now the number of iterations has dropped to 10:
exitflag,output.iterations,output.cgiterations

exitflag =
     3
ans =
    10
ans =
     0
Using a direct solver at each iteration usually causes the number of iterations to decrease, but often takes more time per iteration. For this problem, the tradeoff is beneficial, as the time for quadprog to solve the problem decreases by a factor of 10.
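The size of the speedup depends on the problem and on your hardware. If you want to check the tradeoff yourself, one minimal timing sketch (assuming H, f, lb, ub, and xstart from the code above are still in the workspace; the variable names optsCG, optsFact, tCG, and tFact are hypothetical) is:

% Time both subproblem strategies; results vary with hardware and release.
optsCG = optimoptions('quadprog','Algorithm','trust-region-reflective', ...
    'Display','off');
tic
quadprog(H,f,[],[],[],[],lb,ub,xstart,optsCG);
tCG = toc;

optsFact = optimoptions(optsCG,'SubproblemAlgorithm','factorization');
tic
quadprog(H,f,[],[],[],[],lb,ub,xstart,optsFact);
tFact = toc;

fprintf('CG subproblem: %.3f s, factorization: %.3f s\n',tCG,tFact)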
You can also use the default 'interior-point-convex' algorithm to solve this convex problem:
options = optimoptions('quadprog','Algorithm','interior-point-convex');
[x,fval,exitflag,output] = ...
    quadprog(H,f,[],[],[],[],lb,ub,[],options);
Check the exit flag and iterations (the interior-point algorithm does not use CG iterations):
exitflag,output.iterations

exitflag =
     1
ans =
     8
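If you want to confirm that the two algorithms reach the same minimum, you can compare their results. A minimal sketch, assuming you saved the earlier trust-region-reflective outputs under the hypothetical names xTR and fvalTR before rerunning quadprog:

% Hypothetical check: xTR and fvalTR hold the trust-region-reflective result.
fprintf('Difference in objective values: %g\n',abs(fval - fvalTR))
fprintf('Norm of difference in solutions: %g\n',norm(x - xTR))

Both differences should be small relative to the solver tolerances.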