fminimax

Solve minimax constraint problem

Equation

Finds the minimum of a problem specified by

$$\min_x \max_i F_i(x) \quad \text{such that} \quad \begin{cases} c(x) \le 0 \\ ceq(x) = 0 \\ A \cdot x \le b \\ Aeq \cdot x = beq \\ lb \le x \le ub \end{cases}$$

where b and beq are vectors, A and Aeq are matrices, and c(x), ceq(x), and F(x) are functions that return vectors. F(x), c(x), and ceq(x) can be nonlinear functions.

x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.

You can also solve max-min problems with fminimax, using the identity

$$\max_x \min_i F_i(x) = -\min_x \max_i \left(-F_i(x)\right).$$
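
For instance, a minimal sketch (assuming a vector-valued objective myfun and a starting point x0, as in the Examples section): maximize the minimum of the objectives by minimizing the maximum of their negatives.

[x,fvalNeg,maxfvalNeg] = fminimax(@(x) -myfun(x), x0);  % minimize max of -F
maxmin = -maxfvalNeg;   % value of the max-min problem at the solution x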

You can solve problems of the form

$$\min_x \max_i |F_i(x)|$$

by using the MinAbsMax option; see Notes.

Syntax

x = fminimax(fun,x0)
x = fminimax(fun,x0,A,b)
x = fminimax(fun,x0,A,b,Aeq,beq)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
x = fminimax(problem)
[x,fval] = fminimax(...)
[x,fval,maxfval] = fminimax(...)
[x,fval,maxfval,exitflag] = fminimax(...)
[x,fval,maxfval,exitflag,output] = fminimax(...)
[x,fval,maxfval,exitflag,output,lambda] = fminimax(...)

Description

fminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate. This is generally referred to as the minimax problem.

    Note:   Passing Extra Parameters explains how to pass extra parameters to the objective functions and nonlinear constraint functions, if necessary.

x = fminimax(fun,x0) starts at x0 and finds a minimax solution x to the functions described in fun.

x = fminimax(fun,x0,A,b) solves the minimax problem subject to the linear inequalities A*x ≤ b.

x = fminimax(fun,x0,A,b,Aeq,beq) solves the minimax problem subject to the linear equalities Aeq*x = beq as well. Set A = [] and b = [] if no inequalities exist.

x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub.
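
As a hypothetical illustration (using myfun from the Examples section), enforce x1 + x2 ≤ 7, x1 − x2 = 1, and 0 ≤ x ≤ 10:

A = [1 1];    b = 7;    % linear inequality: x1 + x2 <= 7
Aeq = [1 -1]; beq = 1;  % linear equality:   x1 - x2 = 1
lb = [0;0];   ub = [10;10];
x = fminimax(@myfun,[0.1;0.1],A,b,Aeq,beq,lb,ub);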

x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimax problem to the nonlinear inequality constraints c(x) and the nonlinear equality constraints ceq(x) defined in nonlcon. fminimax optimizes such that c(x) ≤ 0 and ceq(x) = 0. Set lb = [] and/or ub = [] if no bounds exist.

x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization options specified in options. Use optimoptions to set these options.
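
For example, a minimal sketch (myfun and x0 as in the Examples section) that turns on iterative display and tightens the function tolerance:

options = optimoptions('fminimax','Display','iter','TolFun',1e-8);
x = fminimax(@myfun,x0,[],[],[],[],[],[],[],options);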

x = fminimax(problem) finds the minimum for problem, where problem is a structure described in Input Arguments.

Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
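
Alternatively, a minimal sketch of building the structure by hand (field names as listed under Input Arguments; myfun and x0 follow the Examples section):

problem.objective = @myfun;                   % vector-valued objective
problem.x0 = [0.1; 0.1];                      % initial point
problem.solver = 'fminimax';                  % required field
problem.options = optimoptions('fminimax');   % required field; default options
x = fminimax(problem);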

[x,fval] = fminimax(...) returns the value of the objective function fun at the solution x.

[x,fval,maxfval] = fminimax(...) returns the maximum of the objective functions in the input fun evaluated at the solution x.
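
For example (myfun and the starting point as in the Examples section; illustrative only):

[x,fval,maxfval] = fminimax(@myfun,[0.1;0.1]);
% maxfval equals max(fval), the worst-case objective value at x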

[x,fval,maxfval,exitflag] = fminimax(...) returns a value exitflag that describes the exit condition of fminimax.

[x,fval,maxfval,exitflag,output] = fminimax(...) returns a structure output with information about the optimization.

[x,fval,maxfval,exitflag,output,lambda] = fminimax(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

    Note:   If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].

Input Arguments

Function Arguments contains general descriptions of arguments passed into fminimax. This section provides function-specific details for fun, nonlcon, and problem:

fun

The function to be minimized. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. The function fun can be specified as a function handle for a file:

x = fminimax(@myfun,x0)

where myfun is a MATLAB® function such as

function F = myfun(x)
F = ...            % Compute function values at x

fun can also be a function handle for an anonymous function.

x = fminimax(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to a vector using linear indexing.

To minimize the worst-case absolute values of some elements of the vector F(x) (that is, min{max |Fi(x)|}), partition those objectives into the first elements of F and use optimoptions to set the MinAbsMax option to the number of such objectives.

If the gradient of the objective function can also be computed and the GradObj option is 'on', as set by

options = optimoptions('fminimax','GradObj','on')

then the function fun must return, in the second output argument, the gradient value G, a matrix, at x. The gradient consists of the partial derivative dF/dx of each F at the point x. If F is a vector of length m and x has length n, where n is the length of x0, then the gradient G of F(x) is an n-by-m matrix where G(i,j) is the partial derivative of F(j) with respect to x(i) (i.e., the jth column of G is the gradient of the jth objective function F(j)).

By checking the value of nargout, the function can avoid computing G when myfun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not G).

function [F,G] = myfun(x)
F = ...          % Compute the function values at x
if nargout > 1   % Two output arguments
   G = ...       % Gradients evaluated at x
end

    Note:   Setting GradObj to 'on' is effective only when there is no nonlinear constraint, or when the nonlinear constraint has GradConstr set to 'on' as well. This is because internally the objective is folded into the constraints, so the solver needs both gradients (objective and constraint) supplied in order to avoid estimating a gradient.
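
As a concrete sketch, here is a gradient-enabled version of the five objectives from the Examples section (the name myfunGrad is illustrative). With two variables and five objectives, G is 2-by-5; column j holds the gradient of F(j):

function [F,G] = myfunGrad(x)
F(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;
F(2) = -x(1)^2 - 3*x(2)^2;
F(3) = x(1) + 3*x(2) - 18;
F(4) = -x(1) - x(2);
F(5) = x(1) + x(2) - 8;
if nargout > 1                             % gradient requested
    G = [4*x(1)-48, -2*x(1), 1, -1, 1;     % partials w.r.t. x(1)
         2*x(2)-40, -6*x(2), 3, -1, 1];    % partials w.r.t. x(2)
end

With this function (and no nonlinear constraints), enable the gradient and solve:

options = optimoptions('fminimax','GradObj','on');
x = fminimax(@myfunGrad,[0.1;0.1],[],[],[],[],[],[],[],options);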

nonlcon

The function that computes the nonlinear inequality constraints c(x) ≤ 0 and nonlinear equality constraints ceq(x) = 0. The function nonlcon accepts a vector x and returns two vectors c and ceq. The vector c contains the nonlinear inequalities evaluated at x, and ceq contains the nonlinear equalities evaluated at x. The function nonlcon can be specified as a function handle.

x = fminimax(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)

where mycon is a MATLAB function such as

function [c,ceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x
ceq = ...   % Compute nonlinear equalities at x
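
For instance, a minimal hypothetical constraint function that keeps x inside the unit disk, with no nonlinear equalities:

function [c,ceq] = unitdisk(x)
c = x(1)^2 + x(2)^2 - 1;   % satisfied when c(x) <= 0
ceq = [];                  % no nonlinear equalities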

If the gradients of the constraints can also be computed and the GradConstr option is 'on', as set by

options = optimoptions('fminimax','GradConstr','on')

then the function nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). Nonlinear Constraints explains how to "conditionalize" the gradients for use in solvers that do not accept supplied gradients, and explains the syntax of gradients.
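
Continuing the hypothetical unitdisk example, a gradient-enabled version returns GC (one column per inequality constraint) and GCeq; per the note below, pair GradConstr with GradObj:

function [c,ceq,GC,GCeq] = unitdiskGrad(x)
c = x(1)^2 + x(2)^2 - 1;
ceq = [];
if nargout > 2              % gradients requested
    GC = [2*x(1); 2*x(2)];  % gradient of c(x), a 2-by-1 column
    GCeq = [];
end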

    Note:   Setting GradConstr to 'on' is effective only when GradObj is set to 'on' as well. This is because internally the objective is folded into the constraint, so the solver needs both gradients (objective and constraint) supplied in order to avoid estimating a gradient.

    Note   Because Optimization Toolbox™ functions only accept inputs of type double, user-supplied objective and nonlinear constraint functions must return outputs of type double.

problem

Structure with these fields:

  • objective: Objective function
  • x0: Initial point for x
  • Aineq: Matrix for linear inequality constraints
  • bineq: Vector for linear inequality constraints
  • Aeq: Matrix for linear equality constraints
  • beq: Vector for linear equality constraints
  • lb: Vector of lower bounds
  • ub: Vector of upper bounds
  • nonlcon: Nonlinear constraint function
  • solver: 'fminimax'
  • options: Options created with optimoptions

Output Arguments

Function Arguments contains general descriptions of arguments returned by fminimax. This section provides function-specific details for exitflag, lambda, maxfval, and output:

exitflag

Integer identifying the reason the algorithm terminated. The values of exitflag and the corresponding reasons are:

  • 1: Function converged to a solution x.
  • 4: Magnitude of the search direction less than the specified tolerance and constraint violation less than options.TolCon.
  • 5: Magnitude of directional derivative less than the specified tolerance and constraint violation less than options.TolCon.
  • 0: Number of iterations exceeded options.MaxIter or number of function evaluations exceeded options.MaxFunEvals.
  • -1: Algorithm was terminated by the output function.
  • -2: No feasible point was found.

lambda

Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are:

  • lower: Lower bounds lb
  • upper: Upper bounds ub
  • ineqlin: Linear inequalities
  • eqlin: Linear equalities
  • ineqnonlin: Nonlinear inequalities
  • eqnonlin: Nonlinear equalities

maxfval

Maximum of the function values evaluated at the solution x, that is, maxfval = max{fun(x)}.

output

Structure containing information about the optimization. The fields of the structure are:

  • iterations: Number of iterations taken
  • funcCount: Number of function evaluations
  • lssteplength: Size of the line search step relative to the search direction
  • stepsize: Final displacement in x
  • algorithm: Optimization algorithm used
  • firstorderopt: Measure of first-order optimality
  • constrviolation: Maximum of the constraint functions
  • message: Exit message

Options

Optimization options used by fminimax. Use optimoptions to set or change options. See Optimization Options Reference for detailed information.

Diagnostics

Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default, 'off'.

DiffMaxChange

Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange

Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display

Level of display (see Iterative Display):

  • 'off' or 'none' displays no output.

  • 'iter' displays output at each iteration, and gives the default exit message.

  • 'iter-detailed' displays output at each iteration, and gives the technical exit message.

  • 'notify' displays output only if the function does not converge, and gives the default exit message.

  • 'notify-detailed' displays output only if the function does not converge, and gives the technical exit message.

  • 'final' (default) displays just the final output, and gives the default exit message.

  • 'final-detailed' displays just the final output, and gives the technical exit message.

FinDiffRelStep

Scalar or vector step size factor for finite differences. When you set FinDiffRelStep to a vector v, forward finite difference steps delta are

delta = v.*sign′(x).*max(abs(x),TypicalX);

where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are

delta = v.*max(abs(x),TypicalX);

Scalar FinDiffRelStep expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

FinDiffType

Finite differences, used to estimate gradients, are either 'forward' (the default), or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.

FunValCheck

Check whether objective function and constraints values are valid. 'on' displays an error when the objective function or constraints return a value that is complex, Inf, or NaN. The default 'off' displays no error.

GradConstr

Gradient for the user-defined constraints. When set to 'on', fminimax expects the constraint function to have four outputs, as described in nonlcon in Input Arguments. When set to the default 'off', fminimax estimates gradients of the nonlinear constraints by finite differences.

GradObj

Gradient for the user-defined objective function. See the preceding description of fun to see how to define the gradient in fun. Set to 'on' to have fminimax use a user-defined gradient of the objective function. The default 'off' causes fminimax to estimate gradients using finite differences.

MaxFunEvals

Maximum number of function evaluations allowed, a positive integer. The default value is 100*numberOfVariables. See Tolerances and Stopping Criteria and Iterations and Function Counts.

MaxIter

Maximum number of iterations allowed, a positive integer. The default value is 400. See Tolerances and Stopping Criteria and Iterations and Function Counts.

MaxSQPIter

Maximum number of SQP iterations allowed, a positive integer. The default is 10*max(numberOfVariables, numberOfInequalities + numberOfBounds).

MeritFunction

Use the goal attainment/minimax merit function if set to 'multiobj' (default). Use the fmincon merit function if set to 'singleobj'.

MinAbsMax

Number of elements of F(x) for which to minimize the maximum absolute value. See Notes. The default is 0.

OutputFcn

Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]). See Output Function.
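
For example, a minimal sketch of an output function (the name outfun is illustrative) that prints progress and stops after 50 iterations:

function stop = outfun(x,optimValues,state)
stop = false;
if strcmp(state,'iter')
    fprintf('Iteration %d, function count %d\n', ...
            optimValues.iteration, optimValues.funccount);
    stop = optimValues.iteration >= 50;   % halt after 50 iterations
end

Pass it to the solver with options = optimoptions('fminimax','OutputFcn',@outfun).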

PlotFcns

Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]).

  • @optimplotx plots the current point.

  • @optimplotfunccount plots the function count.

  • @optimplotfval plots the function value.

  • @optimplotconstrviolation plots the maximum constraint violation.

  • @optimplotstepsize plots the step size.

For information on writing a custom plot function, see Plot Functions.

RelLineSrchBnd

Relative bound (a real nonnegative scalar value) on the line search step length such that the total displacement in x satisfies |Δx(i)| ≤ relLineSrchBnd· max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in x for cases in which the solver takes steps that it considers too large. The default is no bounds ([]).

RelLineSrchBndDuration

Number of iterations for which the bound specified in RelLineSrchBnd should be active (default is 1).

TolCon

Termination tolerance on the constraint violation, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

TolConSQP

Termination tolerance on inner iteration SQP constraint violation, a positive scalar. The default is 1e-6.

TolFun

Termination tolerance on the function value, a positive scalar. The default is 1e-6. See Tolerances and Stopping Criteria.

TolX

Termination tolerance on x, a positive scalar. The default value is 1e-6. See Tolerances and Stopping Criteria.

TypicalX

Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). fminimax uses TypicalX for scaling finite differences for gradient estimation.

UseParallel

When true, estimate gradients in parallel. Disable by setting to the default false. See Parallel Computing.

Examples

Find values of x that minimize the maximum value of

[f1(x), f2(x), f3(x), f4(x), f5(x)]

where

$$\begin{aligned} f_1(x) &= 2x_1^2 + x_2^2 - 48x_1 - 40x_2 + 304,\\ f_2(x) &= -x_1^2 - 3x_2^2,\\ f_3(x) &= x_1 + 3x_2 - 18,\\ f_4(x) &= -x_1 - x_2,\\ f_5(x) &= x_1 + x_2 - 8. \end{aligned}$$

First, write a file that computes the five functions at x.

function f = myfun(x)
f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;  % Objectives
f(2) = -x(1)^2 - 3*x(2)^2;
f(3) = x(1) + 3*x(2) - 18;
f(4) = -x(1) - x(2);
f(5) = x(1) + x(2) - 8;

Next, invoke an optimization routine.

x0 = [0.1; 0.1];       % Make a starting guess at solution
[x,fval] = fminimax(@myfun,x0);

After seven iterations, the solution is

x,fval

x = 
     4.0000
     4.0000
fval =
     0.0000  -64.0000  -2.0000  -8.0000  -0.0000

Notes

You can solve problems of the form

$$\min_x \max_i G_i(x),$$

where

$$G_i(x) = \begin{cases} |F_i(x)| & 1 \le i \le m \\ F_i(x) & i > m. \end{cases}$$

Here m is the value of the MinAbsMax option. The advantage of this formulation is that you can minimize the absolute value of some components of F, even though the absolute value function is not smooth.

In order to use this option, reorder the elements of F, if necessary, so the first elements are those for which you want the minimum absolute value.

For example, consider the problem in Examples. Modify the problem to find the minimum of the maximum absolute values of all fi(x). Solve this problem by invoking fminimax with the commands

x0 = [0.1; 0.1]; % Make a starting guess at the solution
options = optimoptions('fminimax','MinAbsMax',5); % Minimize abs. values
[x,fval] = fminimax(@myfun,x0,...
                    [],[],[],[],[],[],[],options);

After seven iterations, the solution is

x = 
     4.9256
     2.0796
fval = 
    37.2356 -37.2356 -6.8357 -7.0052 -0.9948

Limitations

The function to be minimized must be continuous. fminimax might only give local solutions.

More About


Algorithms

fminimax internally reformulates the minimax problem into an equivalent nonlinear programming problem by appending additional (reformulation) constraints of the form $F_i(x) \le \gamma$ to the constraints given in Equation, and then minimizing γ over x. fminimax uses a sequential quadratic programming (SQP) method [1] to solve this problem.
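
Concretely, the reformulated problem is

$$\min_{x,\,\gamma}\ \gamma \quad \text{subject to} \quad F_i(x) \le \gamma,\quad i = 1,\dots,m,$$

together with the constraints of Equation.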

Modifications are made to the line search and Hessian. In the line search an exact merit function (see [2] and [4]) is used together with the merit function proposed by [3] and [5]. The line search is terminated when either merit function shows improvement. The function uses a modified Hessian that takes advantage of the special structure of this problem. Using optimoptions to set the MeritFunction option to 'singleobj' uses the merit function and Hessian used in fmincon.

See also SQP Implementation for more details on the algorithm used and the types of procedures printed under the Procedures heading when you set the Display option to 'iter'.

References

[1] Brayton, R.K., S.W. Director, G.D. Hachtel, and L. Vidigal, "A New Algorithm for Statistical Circuit Design Based on Quasi-Newton Methods and Function Splitting," IEEE Trans. Circuits and Systems, Vol. CAS-26, pp. 784-794, Sept. 1979.

[2] Grace, A.C.W., "Computer-Aided Control System Design Using Optimization Techniques," Ph.D. Thesis, University of Wales, Bangor, Gwynedd, UK, 1989.

[3] Han, S.P., "A Globally Convergent Method For Nonlinear Programming," Journal of Optimization Theory and Applications, Vol. 22, p. 297, 1977.

[4] Madsen, K. and H. Schjaer-Jacobsen, "Algorithms for Worst Case Tolerance Optimization," IEEE Trans. Circuits and Systems, Vol. CAS-26, Sept. 1979.

[5] Powell, M.J.D., "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics, Vol. 630, Springer Verlag, 1978.
