I want to solve a system of nonlinear equations using Nonlinear Equations MT 1.0. According to the manual, the number of equations must equal the number of variables. Does that mean the procedure cannot solve an overidentified system, where the vector of function values is longer than the vector of parameters? If so, is there another way to solve such a system? I am looking for something like `fsolve` in Matlab.

Thanks a lot for any help!

Szabolcs

## 7 Answers

I'm not sure why you want standard errors for your estimates. In a system of equations you have coefficients and variables, and you are actually estimating the variables that satisfy those coefficients. In an overidentified system you have multiple estimates for each variable, and it might be interesting to see how well these estimates fit the system. A statistical model would propose that the estimates arise from a population distribution, and in computing standard errors we would be testing sample estimates. But in a system of equations there is no population about which you would be making inferences.

I would be interested in the objective function being very close to zero. The problem there is that this function value depends on the scale of the variables and so I'm not sure how you would evaluate it. It might be possible to use the standard errors as a pragmatic way of determining the variability of the estimated variable values. In that case the "numObs" would be the number of equations. You wouldn't want to use a t-statistic here because you aren't testing the value of the estimate against zero (though I suppose you might in which case the t-statistic would be useful). But in using the standard error to evaluate the variability of the estimate, we again have the problem that it is dependent on the scaling of the variable, and I'm not sure how to deal with that.
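To illustrate the scaling problem mentioned above, here is a small numpy sketch (the residual values and the factor of 100 are made up): rescaling a single equation into different units changes the objective value dramatically even though the system itself is unchanged.

```python
import numpy as np

residuals = np.array([0.01, -0.02, 0.005])           # hypothetical F(x) at the optimum
# Same system, but with the third equation expressed in units 100x smaller,
# so its residual is 100x larger:
rescaled = residuals * np.array([1.0, 1.0, 100.0])

print(np.sum(residuals ** 2))  # small objective value
print(np.sum(rescaled ** 2))   # much larger, though nothing substantive changed
```

This is why "close to zero" is hard to judge without normalising the equations first.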

When you have more equations than parameters, you have an overdetermined system, and there may not be a set of parameters that satisfies all of the equations. What you can do is find a best-fitting set of parameters in the least-squares sense.

For solving this problem you can use any of the optimization Applications, e.g., OptimizationMT, or one of the functions in the GAUSS Run-Time Library, such as QNewtonmt(). Define F(x;b) as your system of equations with F(x;b) = 0; your objective function would then be sumc(F(x;b)^2).
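To make the least-squares reformulation concrete, here is a minimal Python/numpy sketch of the same idea (the toy system `F` and the plain Gauss–Newton loop are illustrative only, not CMLMT or QNewtonmt):

```python
import numpy as np

# Toy overdetermined system: 3 equations, 2 unknowns, no exact root.
# F(x) = 0 is replaced by minimising sum(F(x)**2), as described above.
def F(x):
    a, b = x
    return np.array([a + b - 2.0,
                     a - b - 0.5,
                     a * b - 1.0])

def objective(x):
    return np.sum(F(x) ** 2)

def jacobian(x, eps=1e-7):
    """Forward-difference Jacobian of F (3x2 here)."""
    f0 = F(x)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (F(x + step) - f0) / eps
    return J

def gauss_newton(x0, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        J, f = jacobian(x), F(x)
        # Least-squares step: solve J dx = -f in the 2-norm sense.
        dx, *_ = np.linalg.lstsq(J, -f, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

x_star = gauss_newton([1.0, 1.0])
```

At the solution the objective is small but not zero, which is exactly the overdetermined case: no parameter vector satisfies all three equations simultaneously.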

Your objective function probably returned a scalar value. In that case you need to tell CMLMT the number of observations. Set

`c0.numObs = 10; // or whatever it is.`

Great answer, thanks a lot. In fact, `fsolve` in Matlab does the same: it minimises the sum of squares of the function values.

Thanks again!

I did as you proposed and estimated the parameters with CMLMT. The optimum is found without any trouble and the results make complete sense, but I cannot obtain standard errors. When I set `c0.CovParType = 1`, I get this error message:

**G0094**: Argument out of range [*cmlmt.src*, line 1704]

#### Currently active call:

File cmlmt.src, line 1704, in `cmlmt`

`dv = cdftci(0.5*c1.Alpha,out1.NumObs-pvLength(out1.par)).*se;`

This simply means that it is unable to compute `out1.CovPar`, and thereby `se`.

When I set `c0.CovParType = 0`, there is no error message, but it prints: "The covariance of the parameters failed to invert". This is somewhat puzzling to me, as both the outer product of the gradient (`out.Gradient*out.Gradient'`) and the inverse Hessian (`inv(out.Hessian)`) can be computed. Moreover, `log(cond(out.Hessian)) = 2.75`, which is not terribly bad.
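One thing worth noting (a sketch of one possible cause, not necessarily what CMLMT hit here): with a scalar objective there is effectively a single gradient vector g, and the outer product g*g' always has rank one, so it is singular whenever there is more than one parameter, even when the Hessian itself is perfectly well conditioned. A numpy illustration with made-up numbers:

```python
import numpy as np

g = np.array([[0.3], [-1.2], [0.7]])   # hypothetical 3x1 gradient vector
opg = g @ g.T                          # outer product of the gradient

# The OPG matrix has rank one, so it cannot be inverted for 3 parameters...
print(np.linalg.matrix_rank(opg))      # rank 1

# ...even though a Hessian like this one is well conditioned and invertible.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])        # hypothetical well-conditioned Hessian
print(np.log(np.linalg.cond(H)))
```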

What can be wrong?

Appreciate any help.

Szabolcs

Thanks a lot, I didn't notice this detail. I suppose this is needed because, when estimating the covariance matrix, CMLMT computes an average, and for that it needs a number to divide by.

In my case it is not clear what `c0.NumObs` is, since my likelihood function is not a sum over a vector of observations. I have an overdetermined system `F(x) = 0`, where `F` is `NxM` and `x` is `Kx1` (with `N>K`), and I minimise `sumc(sumc(F.^2))`. What is the number of observations here? `NxM`?
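For what it's worth, the double sum `sumc(sumc(F.^2))` is just the squared norm of the N*M stacked residuals, so each entry of the matrix acts as one residual. A quick numpy check (dimensions chosen arbitrarily):

```python
import numpy as np

N, M = 5, 3
rng = np.random.default_rng(0)
F = rng.standard_normal((N, M))    # hypothetical N x M residual matrix F(x)

# The GAUSS expression sumc(sumc(F.^2)) equals the sum of squares of the
# N*M residuals stacked into a single vector.
stacked = F.reshape(-1)            # length N*M
assert np.isclose(np.sum(F ** 2), np.sum(stacked ** 2))
```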

Another question, now that I have the opportunity to ask. My function `F` is such that I can easily calculate the Jacobian matrix analytically. When I add `mm.Gradient` to the indicator vector, however, the optimisation no longer works. Is it because in `mm.Gradient` I should provide the `1xK` vector of derivatives of `sumc(sumc(F.^2))` with respect to `x`, and not the Jacobian of `F`? If so, I'd rather ignore it, as that is much harder to compute.
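Incidentally, the 1xK gradient of the sum of squares is cheap to obtain from the analytic Jacobian via the chain rule, grad = 2*J'f, so having the Jacobian of `F` should be enough. A small numpy sketch with an illustrative residual function (the toy `f` and `J` below are made up):

```python
import numpy as np

# Toy stacked residual function f: R^2 -> R^3 and its analytic Jacobian.
def f(x):
    a, b = x
    return np.array([a + b - 2.0, a * b - 1.0, a - 0.5])

def J(x):
    a, b = x
    return np.array([[1.0, 1.0],
                     [b,   a  ],
                     [1.0, 0.0]])

# Chain rule: the gradient of sum(f**2) is 2 * J' f.
def grad_sumsq(x):
    return 2.0 * J(x).T @ f(x)

# Check against central finite differences of the scalar objective.
def grad_fd(x, eps=1e-6):
    g = np.empty_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        g[j] = (np.sum(f(x + e) ** 2) - np.sum(f(x - e) ** 2)) / (2 * eps)
    return g

x0 = np.array([1.1, 0.9])
assert np.allclose(grad_sumsq(x0), grad_fd(x0), atol=1e-6)
```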

Thanks again!

Szabolcs

Good point, that makes a lot of sense.

I have a system of 30 equations, each a 3x1 vector equation, and I want to find 9 parameters. With pretty good starting values (meaning they are reasonable and relatively close to the solution), the objective starts at about 500,000, and after 17 iterations the optimiser finds the optimum, where the function value is about 64. I tried very different starting values, some unrealistic, but it always converges to the very same solution with the very same final function value.

Regarding scaling: the variables to find are actually the elements of a 3x3 matrix of VAR coefficients. The diagonal elements are close to 1, while the off-diagonal elements can take any value, but are usually small and can have either sign. I impose no restriction, so the diagonal elements can even exceed one; my only constraint is that they should be positive.

Thanks again for all the help. It's a great user forum; I'll make use of it in the future!

Szabolcs