Dear All,

I'm using constrained optimization and want to compute a numerical Hessian. Has anyone already done this who could point me to some resources or give some useful hints? I already tried CML, which provides a Hessian, but the log-likelihood is unfortunately not straightforward (compared to M-estimation) and I would need to specify a distribution, which I'm not really comfortable with.

best

Oldes

## 4 Answers


There are several functions in the GAUSS Run-Time Library for computing Hessians. If you are using CO, use hessp(&f,x0) to compute your Hessian, where &f is the pointer to the function used in your call to CO and the second argument is the vector of parameter estimates returned by CO. For example,

{ x,fmin,g,retcode } = co(&f,x0);

h = hessp(&f,x);

If you are using COMT, then use hessmt(&f,par,data), where &f is again the pointer to the function used in your call to COMT, par is a PV structure containing the parameter estimates returned by COMT, and data is a DS structure containing any data used by your objective function procedure. For example,

struct comtResults out;

out = comt(&f,par,d0,c0);

h = hessmt(&f,out.par,d0);
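For intuition, routines of this kind typically approximate the Hessian by finite differences of the objective function. Here is a minimal sketch of the standard central-difference formula, written in Python rather than GAUSS for illustration; the helper name num_hessian and the step size eps are my own choices, not part of the GAUSS library.

```python
import numpy as np

def num_hessian(f, x, eps=1e-4):
    """Central-difference approximation to the Hessian of f at x (illustrative sketch)."""
    n = x.size
    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Perturb coordinates i and j in all four sign combinations
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            h[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * eps ** 2)
    return h

# Check on a quadratic: f(x) = x0^2 + 3*x0*x1 + 2*x1^2 has exact Hessian [[2, 3], [3, 4]]
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 2 * x[1] ** 2
H = num_hessian(f, np.array([1.0, 2.0]))
```

Central differences are accurate to O(eps^2) for smooth objectives, which is why a moderate step size like 1e-4 works well in double precision.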


Be advised that if any of your parameters are on a constraint boundary, then the inverse of the Hessian is not the correct covariance matrix of the parameters. The correct "approximate" covariance matrix calculation is described in the CMLMT Manual in Section 2.9.1. This covariance matrix is itself not completely correct and for that reason is called approximate. The correct procedure for models with constrained parameters is hypothesis testing. The method is described in Section 2.9.3 of the CMLMT Manual. Functions for the calculations of the test of the hypothesis can be found in the file hypotest.src in the Run-Time Library. Instructions for the calculations can be found at the top of that file.


Do I see correctly that the Hessian is already multiplied by -1, and that it is therefore the inverse, not the negative of the inverse, that is the variance-covariance matrix?


If your objective function is a log-likelihood, then you are minimizing -logl; thus the Hessian is already multiplied by -1 and its inverse is the variance-covariance matrix.
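As a concrete check of this sign convention, here is a sketch in Python (not GAUSS) using a Bernoulli log-likelihood, where the variance of the MLE is known analytically to be p(1-p)/n. The names negloglik and var_p are mine; the convention assumed is the one described above, i.e. the optimizer minimizes the negative log-likelihood.

```python
import numpy as np

# Bernoulli sample: k successes in n trials; the objective is -logl, as when minimizing
n, k = 100, 30
negloglik = lambda p: -(k * np.log(p) + (n - k) * np.log(1.0 - p))

p_hat = k / n  # closed-form MLE, here 0.3

# Second derivative of -logl at the MLE by central differences
eps = 1e-5
hess = (negloglik(p_hat + eps) - 2.0 * negloglik(p_hat) + negloglik(p_hat - eps)) / eps ** 2

# Because the objective is -logl, the Hessian is already positive, and its
# plain inverse (no extra minus sign) is the variance.
# Analytically: Var(p_hat) = p(1-p)/n = 0.3 * 0.7 / 100 = 0.0021
var_p = 1.0 / hess
```

Had the objective been +logl, the second derivative at the maximum would be negative and one would need the negative of the inverse instead, which is exactly the distinction raised in the question above.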
