# Maxlik 5 & Gauss 12 combination --- any known memory-related bugs?

I am using this combination with a relatively large data set (21,000 rows) held fully in memory. The OS is Windows 7 64-bit with 16 GB of RAM.

I have run smaller versions of the estimation problem with Maxlik using the cross-product of the gradient (gg')^-1 as the covariance matrix estimator. At convergence in smaller problems gg' is invertible. With the bigger problem it is not, though the model specification has not changed.

Is it possible that the size of the data set is causing this behaviour?

Thanks for any help/insight.

Joffre

## 1 Answer

In order to compute the cross-product, the log-probabilities must be computed by observation. In your procedure you have

```gauss
retp(loglikeli*ones(rows(x),1));
```

where `loglikeli` is a scalar log-likelihood being expanded to a vector whose length equals the number of observations. A replicated scalar is not a vector of log-probabilities by observation, so it cannot be used to compute the cross-product matrix. It appears you are estimating an MNL model in which the log-likelihood is computed from the cells of a cross-tabulation, so log-probabilities by observation aren't generally available.

I looked at the Hessian for this model (stored in `_max_FinalHess`) and it was not negative definite, which at a converged maximum indicates a catastrophic loss of precision in the calculation of the Hessian. You could try reworking the calculations in the log-likelihood: precision is lost wherever very large numbers are mixed with very small ones, and your procedure uses `exp()` together with `ln()`, a classic source of precision problems. Another solution would be to write a procedure computing the analytical gradient; the Hessian would then be computed as the gradient of a gradient, increasing precision.
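One standard way to rework an `exp()`/`ln()` combination of this kind is the log-sum-exp identity. A short Python/NumPy sketch (the index values are made up; an MNL denominator is `ln(sum(exp(u)))` over the linear indices `u`) shows the naive form overflowing while the stabilized form stays finite:

```python
import numpy as np

def logsumexp(v):
    # Stable log(sum(exp(v))): factor out the max before exponentiating.
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

# Linear indices x'b with large magnitudes (made-up values).
u = np.array([1000.0, 1001.0, 1002.0])

with np.errstate(over="ignore"):
    naive = np.log(np.sum(np.exp(u)))   # exp(1000) overflows to inf

stable = logsumexp(u)                    # finite, about 1002.41

# Log-probabilities computed the stable way remain finite as well.
log_p = u - stable
```

The same rearrangement can usually be done inside a GAUSS log-likelihood procedure by subtracting the row maximum of the linear indices before calling `exp()`.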

Large MNL models are frequently difficult to estimate because zero or near-zero cell frequencies tend to proliferate, and such cells carry almost no information for the standard errors. Another possible solution would be to fix some parameters to zero, or to each other, thus reducing the parameter space. If you reduce the parameter space sufficiently you might have enough information to estimate the remaining parameters.
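A back-of-the-envelope illustration of the information loss from sparse cells, using the standard binomial-logit result that the asymptotic standard error of a cell's logit is 1/sqrt(n·p·(1−p)). The n = 21,000 figure comes from the question; the cell probabilities here are made up:

```python
import numpy as np

# Asymptotic SE of a cell's logit under a binomial model: 1/sqrt(n*p*(1-p)).
def logit_se(n, p):
    return 1.0 / np.sqrt(n * p * (1.0 - p))

n = 21000
se_common = logit_se(n, 0.30)    # a well-populated cell
se_rare = logit_se(n, 0.0005)    # expected count of only about 10

print(se_common, se_rare)  # the rare cell's SE is roughly 20x larger
```

As a cell probability approaches zero its information n·p·(1−p) vanishes, which is why fixing the associated parameters (to zero or to each other) can restore estimability for the rest.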
