Constrained Optimization MT

Overview


COMT solves nonlinear programming problems subject to general constraints on the parameters (linear or nonlinear, equality or inequality) using the sequential quadratic programming method in combination with several descent methods selectable by the user.

COMT’s ability to handle general nonlinear functions and nonlinear constraints, along with features such as the trust region method, allows you to solve a wide range of sophisticated optimization problems. Built on the speed and number-crunching ability of the GAUSS platform, COMT quickly computes solutions to large problems, making it ideal for large-scale Monte Carlo or bootstrap optimizations.

Version 2.0 is easier to use than ever!

  • New syntax options eliminate the need for PV and DS structures:
    • Decreasing the required code by up to 25%.
    • Decreasing runtime by up to 20%.
    • Simplifying usage.
  • Optional dynamic arguments make it simple and transparent to add extra data arguments beyond the model parameters to your objective function.
  • Updated documentation and examples.
  • Fully backwards compatible with COMT 1.0.

Platform: Windows, Mac, and Linux

    Requirements: GAUSS/GAUSS Engine/GAUSS Light v16 or higher

Key Features

Descent methods

  • BFGS (Broyden, Fletcher, Goldfarb and Shanno)
  • Modified BFGS
  • DFP (Davidon, Fletcher and Powell)
  • Newton-Raphson

Line search methods

  • Augmented trust region method
  • STEPBT
  • Brent’s method
  • Half
  • Strong Wolfe’s conditions

Constraint types

  • Linear equality and inequality constraints
  • Nonlinear equality and inequality constraints
  • Bounded parameters

Advantages

Flexible

  • Supports arbitrary user-provided nonlinear equality and inequality constraints.
  • Linear equality and inequality constraints.
  • Bounded parameters.
  • Specify fixed and free parameters.
  • Dynamic algorithm switching.
  • Compute all, a subset, or none of the derivatives numerically.
  • Easily pass data other than the model parameters as extra input arguments.

Efficient

  • Threaded and thread-safe
  • Option to avoid computations that are the same for the objective function and derivatives.
  • The tremendous speed of user-defined procedures in GAUSS speeds up your optimization problems.

Trusted

  • For more than 30 years, leading researchers have trusted the efficient and numerically sound code in the GAUSS optimization packages to keep them at the forefront of their fields.

Details

Novice users will typically leave most of these options at the default values. However, they can be a great help when tackling more difficult problems.

Control options

  • Linear equality constraints: Optional, simple to specify, linear equality constraints.
  • Linear inequality constraints: Optional, simple to specify, linear inequality constraints.
  • Nonlinear equality constraints: Option to provide a procedure to compute nonlinear equality constraints.
  • Nonlinear inequality constraints: Option to provide a procedure to compute nonlinear inequality constraints.
  • Parameter bounds: Simple parameter bounds of the type: lower_bd ≤ x_i ≤ upper_bd.
  • Feasible test: Controls whether parameters are checked for feasibility during the line search.
  • Trust radius: Set the size of the trust radius, or turn off the trust region method.
  • Descent algorithms: BFGS, Modified BFGS, DFP and Newton.
  • Algorithm switching: Specify descent algorithms to switch between based upon the number of elapsed iterations, a minimum change in the objective function or the line search step size.
  • Line search method: Augmented Lagrangian penalty, STEPBT (quadratic and cubic curve fit), Brent’s method, half-step or Strong Wolfe’s conditions.
  • Active parameters: Control which parameters are active (to be estimated) and which should be fixed at their starting values.
  • Gradient method: Either compute an analytical gradient, or have COMT compute a numerical gradient using the forward, central or backward difference method.
  • Hessian method: Either compute an analytical Hessian, or have COMT compute a numerical Hessian using the forward, central or backward difference method.
  • Gradient check: Compares the analytical gradient computed by the user-supplied function with the numerical gradient to check the analytical gradient for correctness.
  • Random seed: Starting seed value used by the random line search method to allow for repeatable code.
  • Print output: Controls whether (or how often) iteration output is printed and whether a final report is printed.
  • Gradient step: Advanced feature: controls the increment size used when computing numerical first and second derivatives.
  • Random search radius: The radius of the random search, if attempted.
  • Maximum iterations: Maximum number of iterations allowed for convergence.
  • Maximum elapsed time: Maximum number of minutes allowed for convergence.
  • Maximum random search attempts: Maximum allowed number of random line search attempts.
  • Convergence tolerance: Convergence is achieved when the direction vector changes by less than this amount.
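Many of these options are set through members of a control structure that is passed to comt. The sketch below is illustrative only: the comtControlCreate initializer and the bounds and maxIters member names are assumed here based on the usual GAUSS control-structure convention, so check the COMT reference manual for the exact names and argument order.

//Illustrative sketch only: 'comtControlCreate' and the member
//names below are assumed from the usual GAUSS control-structure
//convention; verify them against the COMT reference manual
struct comtControl ctl;
ctl = comtControlCreate();

//Hypothetical setting: bound every parameter to [0, 10]
ctl.bounds = { 0 10 };

//Hypothetical setting: stop after at most 200 iterations
ctl.maxIters = 200;

//Pass the filled control structure to comt along with the data
struct comtResults out;
out = comt(&qfct, x_strt, Q, b, ctl);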

Examples

Below is a full example for a simple nonlinear optimization problem.


//Make comt library available
library comt;

//Create data needed by objective procedure
Q = { 0.78 -0.02 -0.12 -0.14,
     -0.02  0.86 -0.04  0.06,
     -0.12 -0.04  0.72 -0.08,
     -0.14  0.06 -0.08  0.74 };

b = { 0.76, 0.08, 1.12, 0.68 };

//Objective procedure with 4 inputs:
//    i.      x       - The parameter vector
//    ii-iii. Q and b - Extra data needed by the objective procedure
//    iv.     ind     - The indicator vector
proc qfct(x, Q, b, ind);
    //Declare 'mm' to be a modelResults
    //struct local to this procedure, 'qfct'
    struct modelResults mm;
   
    //If the first element of the indicator
    //vector is non-zero, compute function value
    //and assign it to the 'function' member
    //of the modelResults struct
    if ind[1];
        mm.function = 0.5*x'*Q*x - x'*b;
    endif;
    
    //Return modelResults struct
    retp(mm);
endp;

//Starting parameter values
x_strt = { 1, 1, 1, 1 };

//Minimize objective function
struct comtResults out;
out = comt(&qfct, x_strt, Q, b);

//Print results held in comtResults structure
call comtPrt(out);

The above example will create the following report:

=============================
 COMT Version 2.0.0                         
=============================

Return code    = 0   
Function value = -2.17466  
Convergence    : normal convergence

Parameters  Estimates    Gradient
-----------------------------------
x[1,1]         1.5350      0.0000
x[2,1]         0.1220      0.0000
x[3,1]         1.9752      0.0000
x[4,1]         1.4130      0.0000


Number of iterations    16
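Since no constraints were supplied, the example reduces to an unconstrained quadratic program: the minimizer of 0.5*x'*Q*x - x'*b satisfies Q*x = b. The reported estimates can therefore be checked directly with the built-in solpd function, which solves a symmetric positive definite system:

//Q*x = b gives the unconstrained minimizer; solpd solves the
//positive definite system without forming inv(Q)
x_check = solpd(b, Q);

//x_check should match the report: 1.5350, 0.1220, 1.9752, 1.4130
print x_check;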

© Aptech Systems, Inc. All rights reserved.