Writing Objective Functions

Types of Objective Functions

Many Optimization Toolbox™ solvers minimize a scalar function of a multidimensional vector. The objective function is the function the solvers attempt to minimize. Several solvers accept vector-valued objective functions, and some solvers take objective functions that you specify as vectors or matrices.

Objective Type                   | Solvers                                               | How to Write Objectives
Scalar                           | fmincon, fminunc, fminbnd, fminsearch, fseminf, fzero | Writing Scalar Objective Functions
Nonlinear least squares          | lsqcurvefit, lsqnonlin                                | Writing Vector and Matrix Objective Functions
Multivariable equation solving   | fsolve                                                | Writing Vector and Matrix Objective Functions
Multiobjective                   | fgoalattain, fminimax                                 | Writing Vector and Matrix Objective Functions
Linear programming               | linprog                                               | Writing Objective Functions for Linear or Quadratic Problems
Mixed-integer linear programming | intlinprog                                            | Writing Objective Functions for Linear or Quadratic Problems
Linear least squares             | lsqlin, lsqnonneg                                     | Writing Objective Functions for Linear or Quadratic Problems
Quadratic programming            | quadprog                                              | Writing Objective Functions for Linear or Quadratic Problems

Writing Scalar Objective Functions

Function Files

A scalar objective function file accepts one input, say x, and returns one scalar output, say f. The input x can be a scalar, vector, or matrix. A function file can return more outputs (see Including Derivatives).

For example, suppose your objective is a function of three variables, x, y, and z:

f(x) = 3*(x – y)^4 + 4*(x + z)^2 / (1 + x^2 + y^2 + z^2) + cosh(x – 1) + tanh(y + z).

  1. Write this function as a file that accepts the vector xin = [x;y;z] and returns f:

    function f = myObjective(xin)
    f = 3*(xin(1)-xin(2))^4 + 4*(xin(1)+xin(3))^2/(1+norm(xin)^2) ...
        + cosh(xin(1)-1) + tanh(xin(2)+xin(3));
  2. Save it as a file named myObjective.m to a folder on your MATLAB® path.

  3. Check that the function evaluates correctly:

    myObjective([1;2;3])
    
    ans =
        9.2666

For information on how to include extra parameters, see Passing Extra Parameters. For more complex examples of function files, see Minimization with Gradient and Hessian Sparsity Pattern or Minimization with Bound Constraints and Banded Preconditioner.
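For instance, here is a hedged sketch of one common way to handle an extra parameter (the function name parameterizedObjective and the parameter a are hypothetical, used only for illustration). The objective file takes the parameter as a second input:

function f = parameterizedObjective(xin,a)
% Scalar objective that depends on an extra parameter a
f = a*(xin(1)-xin(2))^4 + (xin(1)+xin(3))^2 + cosh(xin(1)-1);

Because solvers call the objective with a single argument, fix the parameter value by capturing it in an anonymous function, and pass the resulting handle to the solver:

a = 3;                                  % a is fixed at the time the handle is created
fun = @(x)parameterizedObjective(x,a);  % fun accepts only x, as solvers require
[x,fval] = fminsearch(fun,[1;2;3])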

Local Functions and Nested Functions.  Functions can exist inside other files as local functions or nested functions. Using local functions or nested functions can lower the number of distinct files you save. Using nested functions also lets you access extra parameters, as shown in Nested Functions.

For example, suppose you want to minimize the myObjective.m objective function, described in Function Files, subject to the ellipseparabola.m constraint, described in Nonlinear Constraints. Instead of writing two files, myObjective.m and ellipseparabola.m, write one file that contains both functions as local functions:

function [x fval] = callObjConstr(x0,options)
% Using a local function for just one file

if nargin < 2
    options = optimoptions('fmincon','Algorithm','interior-point');
end

[x fval] = fmincon(@myObjective,x0,[],[],[],[],[],[], ...
    @ellipseparabola,options);

function f = myObjective(xin)
f = 3*(xin(1)-xin(2))^4 + 4*(xin(1)+xin(3))^2/(1+sum(xin.^2)) ...
    + cosh(xin(1)-1) + tanh(xin(2)+xin(3));

function [c,ceq] = ellipseparabola(x)
c(1) = (x(1)^2)/9 + (x(2)^2)/4 - 1;
c(2) = x(1)^2 - x(2) - 1;
ceq = [];

Solve the constrained minimization starting from the point [1;1;1]:

[x fval] = callObjConstr(ones(3,1))

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is 
non-decreasing in feasible directions, to within the default 
value of the function tolerance, and constraints are satisfied 
to within the default value of the constraint tolerance.

x =
    1.1835
    0.8345
   -1.6439

fval =
    0.5383
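Nested functions offer another way to share extra parameters with an objective. The following is a hedged sketch (the wrapper name runNested and the parameter a are hypothetical); a variable defined in the outer function is visible inside the nested objective:

function [x,fval] = runNested(x0,a)
% The parameter a is shared with the nested objective below
[x,fval] = fminsearch(@nestedObjective,x0);

    function f = nestedObjective(xin)
        % a is in scope here because nestedObjective is nested inside runNested
        f = a*(xin(1)-xin(2))^4 + (xin(1)+xin(3))^2 + cosh(xin(1)-1);
    end
end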

Anonymous Function Objectives

Use anonymous functions to write simple objective functions. For more information about anonymous functions, see What Are Anonymous Functions? in the MATLAB Programming Fundamentals documentation. Rosenbrock's function is simple enough to write as an anonymous function:

anonrosen = @(x)(100*(x(2) - x(1)^2)^2 + (1-x(1))^2);

Check that anonrosen evaluates correctly at [-1 2]:

anonrosen([-1 2])

ans =
   104

Minimizing anonrosen with fminunc yields the following results:

options = optimoptions(@fminunc,'Algorithm','quasi-newton');
[x fval] = fminunc(anonrosen,[-1;2],options)

Local minimum found.

Optimization completed because the size of the gradient
is less than the default value of the function tolerance.

x =
    1.0000
    1.0000

fval =
  1.2262e-10

Including Derivatives

For fmincon and fminunc, you can include gradients in the objective function. You can also include Hessians, depending on the algorithm. The Hessian matrix is defined as $H_{i,j}(x) = \partial^2 f(x) / \partial x_i \, \partial x_j$.

The following table shows which algorithms can use gradients and Hessians.

Solver  | Algorithm               | Gradient | Hessian
fmincon | active-set              | Optional | No
fmincon | interior-point          | Optional | Optional separate function (see Hessian)
fmincon | sqp                     | Optional | No
fmincon | trust-region-reflective | Required | Optional
fminunc | trust-region            | Required | Optional
fminunc | quasi-newton            | Optional | No

Benefits of Including Derivatives.  If you do not provide gradients, solvers estimate gradients via finite differences. If you provide gradients, your solver does not need to perform this finite-difference estimation, so it can save time and be more accurate. Furthermore, if you do not provide a Hessian, solvers use an approximate Hessian, which can be far from the true Hessian. Providing a Hessian can yield a solution in fewer iterations.

For constrained problems, providing a gradient has another advantage. A solver can reach a point x that is feasible, but for which finite differences around x always lead to an infeasible point. Suppose further that the objective function at an infeasible point returns a complex value, Inf, NaN, or an error. In this case, a solver can fail or halt prematurely. Providing a gradient allows a solver to proceed. To obtain this benefit, you might also need to include the gradient of a nonlinear constraint function, and set the GradConstr option to 'on'. See Nonlinear Constraints.
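For instance, here is a hedged sketch of a two-variable nonlinear constraint function that also returns constraint gradients (the name ellipseparabolaGrad is hypothetical; fmincon expects the gradients of the inequality constraints as the columns of the third output):

function [c,ceq,gradc,gradceq] = ellipseparabolaGrad(x)
% Inequality constraints
c(1) = x(1)^2/9 + x(2)^2/4 - 1;   % inside the ellipse
c(2) = x(1)^2 - x(2) - 1;         % below the parabola
ceq = [];                         % no equality constraints

if nargout > 2 % constraint gradients required (GradConstr is 'on')
    gradc = [2*x(1)/9, 2*x(1);    % column j is the gradient of c(j)
             x(2)/2,   -1];
    gradceq = [];
end

Pass the function to fmincon as usual, and set 'GradConstr' to 'on' with optimoptions so the solver uses the supplied gradients.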

Choose Input Hessian for interior-point fmincon.  The fmincon interior-point algorithm has many options for selecting an input Hessian. For syntax details, see Hessian. Here are the options, along with estimates of their relative characteristics.

Hessian                         | Relative Memory Usage         | Relative Efficiency
'bfgs' (default)                | High (for large problems)     | High
'lbfgs'                         | Low to Moderate               | Moderate
'fin-diff-grads'                | Low                           | Moderate
'user-supplied' with 'HessMult' | Low (can depend on your code) | Moderate
'user-supplied' with 'HessFcn'  | ? (depends on your code)      | High (depends on your code)

Use the default 'bfgs' Hessian unless you

  • Run out of memory — Try 'lbfgs' instead of 'bfgs'. If you can provide your own objective gradients, you can also try 'fin-diff-grads', and set the GradObj and GradConstr options to 'on'.

  • Want more efficiency — Provide your own gradients and Hessian, using 'user-supplied' with 'HessFcn'.

The reason 'lbfgs' has only moderate efficiency is twofold. It has relatively expensive Sherman-Morrison updates, and the resulting iteration step can be somewhat inaccurate due to the 'lbfgs' limited memory.

The reason 'fin-diff-grads' and HessMult have only moderate efficiency is that they use a conjugate gradient approach. They accurately estimate the Hessian of the objective function, but they do not generate the most accurate iteration step. For more information, see fmincon Interior Point Algorithm, and its discussion of the LDL approach and the conjugate gradient approach to solving Equation 6-52.
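For example, here is a hedged sketch of selecting the limited-memory approximation for a large problem, using the option names shown in the table above:

options = optimoptions('fmincon','Algorithm','interior-point', ...
    'Hessian','lbfgs');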

How to Include Derivatives.  

  1. Write code that returns:

    • The objective function (scalar) as the first output

    • The gradient (vector) as the second output

    • Optionally, the Hessian (matrix) as the third output

  2. Set the GradObj option to 'on' with optimoptions.

  3. Optionally, set the Hessian option to 'on' or 'user-supplied'.

    For the fmincon interior-point solver, set the Hessian option to 'user-supplied' and set the 'HessFcn' option to @hessianfcn, where hessianfcn is a function that computes the Hessian of the Lagrangian. For details, see Hessian. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.

  4. Optionally, check if your gradient function matches a finite-difference approximation. See Checking Validity of Gradients or Jacobians.

    Tip   For most flexibility, write conditionalized code. Conditionalized means that the number of function outputs can vary, as shown in the following example. Conditionalized code does not error, whatever the value of the GradObj or Hessian option. Unconditionalized code requires you to set these options appropriately.

For example, consider Rosenbrock's function

$$f(x) = 100\left(x_2 - x_1^2\right)^2 + (1 - x_1)^2,$$

which is described and plotted in Solve a Constrained Nonlinear Problem. The gradient of f(x) is

$$\nabla f(x) = \begin{bmatrix} -400\left(x_2 - x_1^2\right)x_1 - 2(1 - x_1) \\ 200\left(x_2 - x_1^2\right) \end{bmatrix},$$

and the Hessian H(x) is

$$H(x) = \begin{bmatrix} 1200x_1^2 - 400x_2 + 2 & -400x_1 \\ -400x_1 & 200 \end{bmatrix}.$$

rosenthree is an unconditionalized function that returns the Rosenbrock function with its gradient and Hessian:

function [f g H] = rosenthree(x)
% Calculate objective f, gradient g, Hessian H
f = 100*(x(2) - x(1)^2)^2 + (1-x(1))^2;
g = [-400*(x(2)-x(1)^2)*x(1)-2*(1-x(1));
        200*(x(2)-x(1)^2)];
H = [1200*x(1)^2-400*x(2)+2, -400*x(1);
            -400*x(1), 200];

rosenboth is a conditionalized function that returns whatever the solver requires:

function [f g H] = rosenboth(x)
% Calculate objective f
f = 100*(x(2) - x(1)^2)^2 + (1-x(1))^2;

if nargout > 1 % gradient required
    g = [-400*(x(2)-x(1)^2)*x(1)-2*(1-x(1));
        200*(x(2)-x(1)^2)];
    
    if nargout > 2 % Hessian required
        H = [1200*x(1)^2-400*x(2)+2, -400*x(1);
            -400*x(1), 200];  
    end

end

nargout returns the number of output arguments that the calling function requests. See Find Number of Function Arguments in the MATLAB Programming Fundamentals documentation.

The fminunc solver, designed for unconstrained optimization, allows you to minimize Rosenbrock's function. Tell fminunc to use the gradient and Hessian by setting options:

options = optimoptions(@fminunc,'Algorithm','trust-region',...
    'GradObj','on','Hessian','on');

Run fminunc starting at [-1;2]:

[x fval] = fminunc(@rosenboth,[-1;2],options)
Local minimum found.

Optimization completed because the size of the gradient
is less than the default value of the function tolerance.

x =
    1.0000
    1.0000

fval =
  1.9310e-017
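For comparison, here is a hedged sketch of supplying the same Hessian to the fmincon interior-point algorithm as a separate function, as described in step 3 above (the function name rosenHessLag is hypothetical). Because this problem has no nonlinear constraints, the Hessian of the Lagrangian reduces to the Hessian of the objective, so the lambda input is unused:

function H = rosenHessLag(x,lambda)
% Hessian of the Lagrangian for the Rosenbrock objective;
% lambda is unused because there are no nonlinear constraints
H = [1200*x(1)^2-400*x(2)+2, -400*x(1);
     -400*x(1), 200];

Set the options and call fmincon with empty constraint arguments:

options = optimoptions(@fmincon,'Algorithm','interior-point', ...
    'GradObj','on','Hessian','user-supplied','HessFcn',@rosenHessLag);
[x fval] = fmincon(@rosenboth,[-1;2],[],[],[],[],[],[],[],options)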

If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians automatically, as described in Symbolic Math Toolbox Calculates Gradients and Hessians.
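As a hedged sketch of that workflow (requires Symbolic Math Toolbox; the variable names are chosen only for illustration), you can compute the Rosenbrock gradient and Hessian symbolically:

syms x1 x2
f = 100*(x2 - x1^2)^2 + (1 - x1)^2;   % Rosenbrock's function as a symbolic expression
gradf = gradient(f,[x1 x2])           % symbolic gradient vector
hessf = hessian(f,[x1 x2])            % symbolic Hessian matrix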

Writing Vector and Matrix Objective Functions

Some solvers, such as fsolve and lsqcurvefit, have objective functions that are vectors or matrices. The main difference in usage between these types of objective functions and scalar objective functions is the way to write their derivatives. The matrix of first-order partial derivatives of a vector-valued or matrix-valued function is called the Jacobian; the vector of first-order partial derivatives of a scalar function is called the gradient.

Jacobians of Vector Functions

If x is a vector of independent variables, and F(x) is a vector function, the Jacobian J(x) is

$$J_{ij}(x) = \frac{\partial F_i(x)}{\partial x_j}.$$

If F has m components, and x has k components, J is an m-by-k matrix.

For example, if

$$F(x) = \begin{bmatrix} x_1^2 + x_2 x_3 \\ \sin(x_1 + 2x_2 - 3x_3) \end{bmatrix},$$

then J(x) is

$$J(x) = \begin{bmatrix} 2x_1 & x_3 & x_2 \\ \cos(x_1 + 2x_2 - 3x_3) & 2\cos(x_1 + 2x_2 - 3x_3) & -3\cos(x_1 + 2x_2 - 3x_3) \end{bmatrix}.$$

The function file associated with this example is:

function [F jacF] = vectorObjective(x)
F = [x(1)^2 + x(2)*x(3);
    sin(x(1) + 2*x(2) - 3*x(3))];
if nargout > 1 % need Jacobian
    jacF = [2*x(1),x(3),x(2);
        cos(x(1)+2*x(2)-3*x(3)),2*cos(x(1)+2*x(2)-3*x(3)), ...
        -3*cos(x(1)+2*x(2)-3*x(3))];
end
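A hedged usage sketch, assuming you solve F(x) = 0 with fsolve and want the solver to use this Jacobian; set the Jacobian option to 'on', otherwise the solver estimates the Jacobian by finite differences:

options = optimoptions(@fsolve,'Jacobian','on');
[x,fval] = fsolve(@vectorObjective,[1;1;1],options)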

Jacobians of Matrix Functions

The Jacobian of a matrix F(x) is defined by changing the matrix to a vector, column by column. For example, rewrite the matrix

$$F = \begin{bmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \\ F_{31} & F_{32} \end{bmatrix}$$

as a vector f:

$$f = \begin{bmatrix} F_{11} \\ F_{21} \\ F_{31} \\ F_{12} \\ F_{22} \\ F_{32} \end{bmatrix}.$$

The Jacobian of F is defined as the Jacobian of f,

$$J_{ij} = \frac{\partial f_i}{\partial x_j}.$$

If F is an m-by-n matrix, and x is a k-vector, the Jacobian is an mn-by-k matrix.

For example, if

$$F(x) = \begin{bmatrix} x_1 x_2 & x_1^3 + 3x_2^2 \\ 5x_2 - x_1^4 & x_2/x_1 \\ 4 - x_2^2 & x_1^3 - x_2^4 \end{bmatrix},$$

then the Jacobian of F is

$$J(x) = \begin{bmatrix} x_2 & x_1 \\ -4x_1^3 & 5 \\ 0 & -2x_2 \\ 3x_1^2 & 6x_2 \\ -x_2/x_1^2 & 1/x_1 \\ 3x_1^2 & -4x_2^3 \end{bmatrix}.$$
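Here is a hedged sketch of a function file for this example (the name matrixObjective is hypothetical). The second output is the Jacobian of the column-by-column vectorization F(:):

function [F,jacF] = matrixObjective(x)
% 3-by-2 matrix-valued objective
F = [x(1)*x(2),       x(1)^3 + 3*x(2)^2;
     5*x(2) - x(1)^4, x(2)/x(1);
     4 - x(2)^2,      x(1)^3 - x(2)^4];

if nargout > 1 % Jacobian of F(:) required (6-by-2)
    jacF = [x(2),         x(1);
            -4*x(1)^3,    5;
            0,            -2*x(2);
            3*x(1)^2,     6*x(2);
            -x(2)/x(1)^2, 1/x(1);
            3*x(1)^2,     -4*x(2)^3];
end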

Jacobians with Matrix-Valued Independent Variables

If x is a matrix, define the Jacobian of F(x) by changing the matrix x to a vector, column by column. For example, if

$$X = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix},$$

then the gradient is defined in terms of the vector

$$x = \begin{bmatrix} x_{11} \\ x_{21} \\ x_{12} \\ x_{22} \end{bmatrix}.$$

With

$$F = \begin{bmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \\ F_{31} & F_{32} \end{bmatrix},$$

and with f the vector form of F as above, the Jacobian of F(X) is defined as the Jacobian of f(x):

$$J_{ij} = \frac{\partial f_i}{\partial x_j}.$$

So, for example,

$$J(3,2) = \frac{\partial f(3)}{\partial x(2)} = \frac{\partial F_{31}}{\partial X_{21}}, \qquad J(5,4) = \frac{\partial f(5)}{\partial x(4)} = \frac{\partial F_{22}}{\partial X_{22}}.$$

If F is an m-by-n matrix and x is a j-by-k matrix, then the Jacobian is an mn-by-jk matrix.

Writing Objective Functions for Linear or Quadratic Problems

The following solvers handle linear or quadratic objective functions:

  • linprog and intlinprog — Minimize f'*x for a given vector f. You specify the objective by passing f.

  • lsqlin and lsqnonneg — Minimize ‖C*x – d‖ for a given matrix C and vector d. You specify the objective by passing C and d.

  • quadprog — Minimize (1/2)*x'*H*x + f'*x for a given symmetric matrix H and vector f. You specify the objective by passing H and f.
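For instance, here is a hedged sketch of a small unconstrained quadratic program (the data are chosen only for illustration). quadprog minimizes (1/2)*x'*H*x + f'*x, so the objective is specified entirely by H and f:

H = [2 0; 0 2];     % symmetric matrix for the quadratic term
f = [-4; -6];       % vector for the linear term
x = quadprog(H,f)   % unconstrained minimizer; returns [2;3]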

Maximizing an Objective

All solvers attempt to minimize an objective function. If you have a maximization problem, that is, a problem of the form

$$\max_x f(x),$$

then define g(x) = –f(x), and minimize g.

For example, to find the maximum of tan(cos(x)) near x = 5, evaluate:

[x fval] = fminunc(@(x)-tan(cos(x)),5)
Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

x =
    6.2832

fval =
   -1.5574
The maximum is 1.5574 (the negative of the reported fval), and occurs at x = 6.2832. This answer is correct since, to five digits, the maximum is tan(1) = 1.5574, which occurs at x = 2π = 6.2832.
