Infinite values in the covariance matrix: what does it mean?

I ran a program written with the CMLMT library, and I get the following result for the covariance matrix.

Parameters      Estimates    Std. err.    Est./s.e.    Prob.      Gradient
---------------------------------------------------------------------------
params[1,1]     1.2123       0.0000       +INF         0.0000     3.1612
params[2,1]     0.0086       0.0047       1.852        0.0641     56.8115
params[3,1]     0.0621       0.0010       64.384       0.0000     -0.9282
params[4,1]     0.0000       .            .            0.0000     -1731.0942

Correlation matrix of the parameters
-INF           -INF            +INF           .
-INF            1               0.015254145   .
+INF            0.015254145     1             .
   .            .               .             .

I don't understand the dots in the matrix above. Is there anything wrong with my program that has led to this result?

4 Answers

The "dot" or missing value indicates that your 4th parameter is on a constraint boundary.  Standard errors are not available for parameters on constraint boundaries.  The +INF is a different problem.  This appears to be a failure of precision in calculating the Hessian.  I would need to see the Hessian before stating exactly what happened.  Print out and post the Hessian member of the output structure:

struct cmlmtResults out;

out = cmlmt(&lpr,p0,d0,c0);

print out.Hessian;

 






return code = 0
normal convergence

Log-likelihood 12209.6
Number of cases 2058

Covariance of the parameters computed by the following method:
ML covariance matrix

Parameters      Estimates    Std. err.    Est./s.e.    Prob.      Gradient
---------------------------------------------------------------------------
params[1,1]     4.2010       1.3701        3.066       0.0022     -2.9563
params[2,1]     0.0198       0.0029        6.839       0.0000     1.2997
params[3,1]     0.0001       .             .           0.0000     -4373640.5893
params[4,1]     -1.2359      0.0031       -398.569     0.0000     -1395.9092

Correlation matrix of the parameters
 1             -0.62747549    .     -0.12393441
-0.62747549     1             .      0.087242746
 .              .             .      .
-0.12393441     0.087242746   .      1

Wald Confidence Limits

0.95 confidence limits
Parameters      Estimates    Lower Limit    Upper Limit    Gradient
----------------------------------------------------------------------
params[1,1]     4.2010       1.5140         6.8879         -2.9563
params[2,1]     0.0198       0.0141         0.0254         1.2997
params[3,1]     0.0001       .              .              -4373640.5893
params[4,1]     -1.2359      -1.2420        -1.2298        -1395.9092

Number of iterations 46
Minutes to convergence 0.01423

out.Hessian:

 0.88569506      260.79136      -59093.077        27.286083
 260.79136       197317.52       16008.377       -1770.9169
-59093.077       16008.377       2.7946348e+011  -1.8512488e+008
 27.286083      -1770.9169      -1.8512488e+008   105643.07

//////////////////////////////////////////////////////////

I am also interested in the huge number in the gradient column.






This is a different result, and I don't know what its relationship is to the previous results. In this case there is no +INF, so I can't respond to that issue. The third parameter is on a constraint boundary. The gradient is essentially composed of Lagrangian coefficients measuring the constraints. The size of the third element of the gradient indicates that the constraint on that parameter fits very poorly. Even though it is not on a constraint boundary, the fourth parameter is being forced to a value that fits poorly as a result of the constraint.
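
A rough way to read that, assuming the constraints here are simple bounds and glossing over sign conventions: at a constrained optimum the first-order condition is

$$\nabla \ell(\hat{\theta}) = \sum_j \hat{\lambda}_j \, \nabla c_j(\hat{\theta}),$$

so when a bound on a single parameter is active, the corresponding entry of the printed gradient is essentially the Lagrange multiplier for that bound. A multiplier with the magnitude reported for params[3,1] (about 4.4 million) says the bound is binding extremely hard.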

The Hessian is not positive definite. It is likely that you didn't provide analytical derivatives in your likelihood procedure, and thus the Hessian and the gradient are computed numerically. A Hessian that fails to be positive definite indicates a problem with the precision of the calculations. If you were to provide analytical derivatives, you would get more efficient iterations and a better estimate of the Hessian.

The calculation of the covariance matrix requires a positive definite Hessian, and when it is not positive definite, a generalized inverse is used instead of the usual inverse. The calculations when there are constraints are described in Section 3.8 of the CMLMT Manual.
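
As a conceptual sketch only (this is not CMLMT's actual internal code, and depending on CMLMT's sign convention the matrix to invert may be -out.Hessian rather than out.Hessian), the fallback looks roughly like this:

trap 1;                       // return an error code instead of halting
covp = invpd(out.Hessian);    // usual inverse, assuming positive definiteness
trap 0;

if scalerr(covp);             // invpd() failed: matrix is not positive definite
    covp = pinv(out.Hessian); // fall back to a Moore-Penrose generalized inverse
endif;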

log(cond(Hessian)) is a measure of how many decimal places are lost in computing its inverse. In your case it is 11.72. There are 16 places of accuracy on your computer. Eight places of accuracy are lost in computing the Hessian numerically (four for the gradient), so roughly 20 places of accuracy are being lost in computing the covariance matrix of the parameters; in other words, there are no places of accuracy left at all. The numbers being printed here have no real meaning.
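
If it helps, that figure can be reproduced from the Hessian already stored in the results structure; a minimal sketch, using GAUSS's cond() and base-10 log():

// Approximate number of decimal digits lost when inverting the Hessian;
// this should come out near the 11.72 quoted above.
print log(cond(out.Hessian));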

That CMLMT isn't able to calculate a positive definite Hessian suggests that the data you are using are somewhat inadequate for determining the parameters of your model. The fix is either to get more and better data, or to modify your model to better represent what is happening in the data.

That you have a parameter on a constraint boundary with a large first derivative also suggests a lack of fit between the model and the data.

A much better assessment of the fit of the data and the model in your case requires an analytical Hessian.  It is possible that much of the problem is due to the degradation of the precision in calculating the log-likelihood.  Such a problem is often due to the use of exp() or a power function which can generate very large numbers that very quickly erode the precision of the calculations.  Mixing small numbers with very large numbers is a recipe for catastrophic failure of the calculations.
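
To illustrate that last point: a normal log-density can be accumulated directly in logs, so no large intermediate values from exp() or power functions are ever formed. A minimal sketch, with a hypothetical helper name (GAUSS's lnpdfn() for the standard normal log-density could be used in the same way):

// Log of a normal density computed directly in logs.
// x, mu and sd are N x 1 vectors; sd must be strictly positive.
proc lnnormpdf(x, mu, sd);
    retp( -0.5*ln(2*pi) - ln(sd) - 0.5*((x - mu)./sd).^2 );
endp;

// summed log-likelihood: sumc(lnnormpdf(x, mu, sd))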

If you were to print the procedure for computing the log-likelihood here I might be able to make some suggestions.






Very kind of you, thanks. The log-likelihood procedure is posted below. You are right that the procedure doesn't include a calculation for its gradient.

////////////////////////////////////////////////////////////////////////////////

struct DS d0;
// create a default data structure
d0 = reshape(dsCreate,2,1);
d0[1].datamatrix = r1dayF;
d0[2].datamatrix = r1dayL;

struct PV p0;
// creates a default parameter structure
p0 = pvCreate;
p0 = pvPackm(p0,5|.0242|.0612|.5,"params",1|1|1|1);

struct cmlmtControl c0;

// creates a default structure
c0 = cmlmtControlCreate;
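// note: the third row gives sigma (params[3]) a lower bound of 0 --
// this is the constraint boundary discussed in the answers above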
c0.Bounds = { -50 50,
              -50 50,
                0 50,
              -50 50 };
c0.NumObs = 2058;

struct cmlmtResults out;

out = cmlmt(&cklslnl,p0,d0,c0);

// prints the results
call cmlmtprt(out);
print out.Hessian;

proc cklslnl(struct PV p, struct DS d, ind);
    local params /*,a,b,sigma,k*/, r1dayF, r1dayL, mean, std, pdf;

    struct modelResults mm;

    if ind[1];
        params = pvUnpack(p,"params");
        // a = params[1];
        // b = params[2];
        // sigma = params[3];
        // k = params[4];

        r1dayF = d[1].datamatrix;
        r1dayL = d[2].datamatrix;

        mean = r1dayL + params[1]*(params[2] - r1dayL)/250;
        std  = params[3]*r1dayL.^params[4]/250^.5;

        pdf = exp(-(r1dayF - mean).^2./(2*std.^2))./((2*pi)^.5.*std);
        mm.function = sumc(ln(pdf));
    endif;

    retp(mm);
endp;

////////////////////////////////////////////////////////////////////////////

You said that the data may be inadequate, but unfortunately that's all I can get. I thought that a sample with 2058 observations would be enough to estimate the model.

Regarding the Hessian matrix: for this model it's difficult to find the analytical representation. If you can give some advice on the analytical Hessian, it would be highly appreciated!
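
For what it's worth, here is a hedged sketch of what analytic first derivatives could look like for this particular likelihood. The algebra is derived from the mean and std expressions in the posted proc and should be double-checked before use. It assumes the usual CMLMT convention that ind[2] requests first derivatives returned in mm.gradient (here as an N x 4 matrix of per-observation scores; verify the expected shape against the CMLMT manual), and it requires r1dayL to be strictly positive because of the ln(r1dayL) term. It also folds in the earlier suggestion of computing the log-density directly in logs. The proc name cklslnl_grad is just a placeholder.

proc cklslnl_grad(struct PV p, struct DS d, ind);
    local params, r1dayF, r1dayL, mean, std, e, z2;

    struct modelResults mm;

    params = pvUnpack(p,"params");       // params = a | b | sigma | k
    r1dayF = d[1].datamatrix;
    r1dayL = d[2].datamatrix;            // must be > 0 for ln(r1dayL) below

    mean = r1dayL + params[1]*(params[2] - r1dayL)/250;
    std  = params[3]*r1dayL.^params[4]/250^.5;
    e    = r1dayF - mean;                // residuals
    z2   = (e./std).^2;                  // squared standardized residuals

    if ind[1];
        // log-density written directly in logs: no exp() followed by ln()
        mm.function = sumc( -0.5*ln(2*pi) - ln(std) - 0.5*z2 );
    endif;

    if ind[2];
        // N x 4 matrix of per-observation derivatives w.r.t. (a, b, sigma, k):
        //   dl/da     = (e/std^2) * (b - r1dayL)/250
        //   dl/db     = (e/std^2) * a/250
        //   dl/dsigma = (z2 - 1)/sigma
        //   dl/dk     = (z2 - 1) * ln(r1dayL)
        mm.gradient = ( (e./std.^2).*(params[2] - r1dayL)/250 ) ~
                      ( (e./std.^2).*params[1]/250 )            ~
                      ( (z2 - 1)./params[3] )                   ~
                      ( (z2 - 1).*ln(r1dayL) );
    endif;

    retp(mm);
endp;

An analytic Hessian would follow the same pattern under ind[3]; even the gradient alone, though, usually gives CMLMT a noticeably better-behaved numerical Hessian and a cleaner covariance matrix.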
