# Different results from the same optimization procedure

Hello,

I am running a minimization procedure using optmum with a maximum-likelihood function in the context of a Kalman filter model. From the output I compute the covariance matrix by inverting the Hessian, then take the square roots of its diagonal to get the standard errors.

The problem is that if I run the code several times I get different values for the standard errors. This is a bit confusing 🙁

I copy parts of my code below; maybe it helps.

1. the optimization call:

```gauss
_opgtol = 0.00001;
{ xout, fout, gout, cout } = optmum(&lik_fcn, PRMTR_IN);
```

2. the inversion of the hessian:

```gauss
COV = invpd(_opfhess);
```

3. standard error:

```gauss
SD_fnl = sqrt(diag(COV));
```
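Since GAUSS is not runnable here, the same three-step pipeline (set up the Hessian, invert it, take square roots of the diagonal) can be sketched in Python/NumPy. The matrix below is a made-up stand-in for `_opfhess`; variable names mirror the GAUSS code above.

```python
import numpy as np

# Hypothetical numerical Hessian of the negative log-likelihood at the optimum
# (the role played by _opfhess in GAUSS); any symmetric positive-definite
# matrix will do for illustration.
hess = np.array([[40.0,  2.0,  1.0],
                 [ 2.0, 25.0, -3.0],
                 [ 1.0, -3.0, 10.0]])

cov = np.linalg.inv(hess)        # COV = invpd(_opfhess)
sd_fnl = np.sqrt(np.diag(cov))   # SD_fnl = sqrt(diag(COV))
print(sd_fnl)
```

Note that `invpd` assumes the matrix is positive definite; at a true minimum of the negative log-likelihood that should hold, and if it fails, the optimizer has likely not converged to a minimum.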


What is the condition number of the Hessian? The quantity d = log10(cond(_opfhess)) is an approximate measure of the number of decimal digits lost in computing an inverse. If the Hessian in _opfhess is computed numerically, about 8 digits are already lost in computing the Hessian itself, so if d is greater than 8 you have lost all accuracy in the standard errors. It also indicates a catastrophic loss of precision in the calculations, and under that condition the Hessian calculation can vary with very small differences in rounding errors.
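This diagnostic can be sketched in Python/NumPy (the matrix is an illustrative example, not the poster's Hessian; in GAUSS you would pass `_opfhess` to `cond()`):

```python
import numpy as np

# Illustrative, well-conditioned Hessian.
hess = np.array([[40.0,  2.0],
                 [ 2.0, 25.0]])

# Approximate number of decimal digits lost when inverting hess.
d = np.log10(np.linalg.cond(hess))
print(d)
```

A small d (here well under 1) means the inverse, and hence the standard errors, are numerically trustworthy.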

I checked the condition number of the Hessian and it is 4.4.

But in any case, I don't think a rounding problem would cause the standard errors of all the coefficients to change, and even to be all equal in some cases.

Maybe I was not very clear in my previous post, but the point is that I am running the same code several times on the same data and getting different values for the standard errors of the estimated coefficients.

I believe it is a data problem: when I omit the first 16 iterations in my Kalman filter loop, the problem disappears.

However, the problem I have now is that for different data vectors (different countries but the same variable) the code does not converge.

Could you maybe give me an idea of which underlying problems there may be when convergence is not achieved?

Sorry if my question is too general.

Babi



What is happening is that when the main optimization methods in Optmum get stuck, a random search is used. The random search is what is causing the differences in the standard errors. If you set the random number seed with the rndseed command, you will get the same answer from run to run.

aptech
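The effect of seeding can be illustrated with a toy random search in Python (a stand-in for Optmum's fallback, not Optmum itself; GAUSS's `rndseed` plays the same role as the seed below):

```python
import numpy as np

def random_search(seed, n_draws=100):
    """Toy random search for the minimum of f(x) = (x - 3)**2."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-10, 10, n_draws)
    return xs[np.argmin((xs - 3.0) ** 2)]

# Same seed -> identical result on every run; without a fixed seed,
# each run draws different points and can return a different minimizer.
a = random_search(seed=12345)
b = random_search(seed=12345)
print(a == b)  # True
```

The same logic explains the forum question: an unseeded random search inside the optimizer lands at slightly different points each run, which then propagates into the Hessian and the standard errors.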


