I am estimating a probit model and some other nonlinear models in GAUSS. After a model converges at a 1e-5 tolerance using OPTMUM or QNewton for the optimization, I plug the estimated coefficients back in as the starting values. I expect there to be no iterations, since it has already converged, but it iterates again, and the gradient at the first iteration is sometimes even larger than 1e-2. Why is that?
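For concreteness, here is a minimal sketch of what I am doing with OPTMUM (the proc name &lnlik and the starting vector b0 are placeholders; _opgtol is the gradient convergence tolerance):

library optmum;
#include optmum.ext;
optset;

_opgtol = 1e-5;                            /* stop when the gradient falls below 1e-5 */

/* first run from rough starting values */
{ b1, f1, g1, ret1 } = optmum(&lnlik, b0);

/* restart from the converged estimates -- I expected this to stop immediately */
{ b2, f2, g2, ret2 } = optmum(&lnlik, b1);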
3 Answers
How many decimal places did you save from the converged values? The solution is stored to about 16 decimal places, and if your starting values carry fewer than that, say 5 places, then the optimizer will have to iterate to recover the remaining 11.
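If it helps, a sketch of one way to avoid the truncation entirely (variable names illustrative): save the estimates to a GAUSS matrix file instead of copying printed output, since a .fmt file keeps the full double precision.

save bhat = b1;        /* writes bhat.fmt in full double precision */

/* later, in a new session */
load bhat;             /* reads bhat.fmt back, no digits lost */
{ b2, f2, g2, ret2 } = optmum(&lnlik, bhat);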
Thank you! I saved 8 decimal places, and the tolerance is 1e-5. But when it iterates again, some of the gradients are even greater than 1e-4, and when it converges again at 1e-5, some of the estimates differ from the starting values (the previous estimates) in the 2nd decimal place.
Maybe I have a really flat objective function?
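For what it's worth, a sketch of how I could check this (my log-likelihood proc &lnlik and the rounded vector bhat_rounded are placeholders): compute the numerical gradient at the rounded starting values with gradp and see how far it sits from the 1e-5 stopping rule.

g_restart = gradp(&lnlik, bhat_rounded);   /* numerical gradient at the 8-decimal restart point */
print maxc(abs(g_restart'));               /* anything above 1e-5 forces more iterations */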
You are likely right about the flat function. Check the condition number of your Hessian. I'm not sure which application you're using, but if it's CMLMT:
struct cmlmtResults out;            /* holds estimates, Hessian, etc. */
out = cmlmt(&lpr, p0, d0, c0);      /* maximize the log-likelihood in &lpr */
print log10(cond(out.hessian));     /* decimal digits lost inverting the Hessian */
The number printed is the approximate number of decimal places lost in computing the inverse for the covariance matrix of the parameters. If it's greater than 8, the Hessian is ill-conditioned, indicating a relatively flat function. If it's greater than 16, it is not a positive definite matrix.
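To put rough, hypothetical numbers on it: if log10(cond(out.hessian)) prints 10, roughly 10 of the 16 available digits are lost, leaving about 6 reliable digits, and on such a flat ridge two points that differ in the 2nd decimal can produce objective values and gradients that are indistinguishable at a 1e-5 tolerance. You can also look at the eigenvalues of the Hessian directly; near-zero eigenvalues mark the flat directions:

ev = eigh(out.hessian);        /* eigenvalues of the symmetric Hessian, ascending */
print minc(ev)~maxc(ev);       /* a tiny smallest eigenvalue relative to the largest signals a flat direction */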