floating point precision

Is there a possibility to change the default double precision to quad precision?

Best regards,

Piotr

5 Answers



Sorry, I don't want to horn in, but couldn't you "symmetrise" your moment matrices? If you have a matrix A which should be symmetric in exact arithmetic but is not necessarily so in floating-point arithmetic, you can redefine it as

A = 0.5 * (A + A')

and then it will be exactly symmetric.
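
The same idea in a NumPy sketch (Python here purely for illustration; the matrix sizes and seed are arbitrary). Repeated products of a symmetric matrix can pick up round-off asymmetry, and averaging with the transpose removes it exactly, because floating-point addition is commutative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A moment-type matrix: X'X is symmetric in exact arithmetic, but
# repeated products can lose exact symmetry to round-off.
X = rng.standard_normal((200, 30))
W = X.T @ X
A = W @ W @ W                    # symmetric in exact arithmetic only

asym = np.max(np.abs(A - A.T))   # round-off asymmetry (tiny)

# Symmetrise: (A + A')[i,j] and (A + A')[j,i] are the same two numbers
# added in either order, so the average is bitwise symmetric.
A_sym = 0.5 * (A + A.T)
```
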



GAUSS does not support quad precision math at this time. Are you wanting this for an optimization problem?

aptech

I'm now working on some procedures for multivariate analysis of panel data. My problem arises when the dimension of the vector process (the number of cross-sections times the number of variables) exceeds 25-30. After a number of calculations involving moment matrices, the result should still be symmetric; however, differences appear at the 13th-14th significant digit. As a result, the determinant of the matrix is sometimes slightly negative, because the smallest eigenvalue turns out to be negative, so I cannot compute, for example, the square root of this matrix using the eigenvalue decomposition.

Best regards,
PiotrK
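
This situation can be reproduced with a small NumPy sketch (illustrative only; sizes and seed are arbitrary): a rank-deficient moment matrix has exact zero eigenvalues, but the solver returns them only up to round-off, as tiny numbers that can be of either sign, so a spectral square root can hit the square root of a negative number:

```python
import numpy as np

rng = np.random.default_rng(1)

# A rank-deficient "moment matrix": 30x30, PSD, rank 20 in exact
# arithmetic, so ten eigenvalues are exactly zero on paper.
B = rng.standard_normal((30, 20))
M = B @ B.T

vals = np.linalg.eigvalsh(M)      # ascending order

# The ten "zero" eigenvalues come back as round-off-sized numbers,
# in general of either sign.
tiny = vals[:10]
```
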



@sebszab That is a creative solution; please always feel free to contribute.

@PiotrK If your problem is only with the computation of the eigenvalues, you can use either eigh (or eighv if you need the eigenvectors). The eigh functions assume a symmetric matrix and will only read data from half of the matrix, which should solve your problem.
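
NumPy's symmetric eigensolver behaves the same way, which makes the point easy to demonstrate (Python for illustration only; the specific matrix and corrupted values are arbitrary). Because only one triangle is read, any asymmetry hiding in the other triangle is simply ignored:

```python
import numpy as np

rng = np.random.default_rng(2)

# Corrupt the strict upper triangle only.
M = rng.standard_normal((5, 5))
M_bad = M.copy()
M_bad[np.triu_indices(5, k=1)] = 99.0

# eigvalsh reads only the lower triangle (UPLO='L' is NumPy's default),
# so the corrupted upper triangle has no effect on the result.
v1 = np.linalg.eigvalsh(M, UPLO='L')
v2 = np.linalg.eigvalsh(M_bad, UPLO='L')
```
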

aptech

@sebszab Indeed, such symmetrisation/averaging could be useful; however, I would prefer to have control over the precision of the calculations.

@aptech I use the eighv function. Consider the following code:

proc (1) = smatpwr(M, p);
    // Computes M^p for a symmetric matrix M via spectral decomposition
    local va, ve, diag_va, Mp;
    { va, ve } = eighv(M);
    va = va .^ p;
    diag_va = diagrv(zeros(rows(va), rows(va)), va);
    Mp = ve * diag_va * ve';
    retp(Mp);
endp;

proc (2) = eigen(M, N);
    // Eigenvalue decomposition of M relative to N, eigenvalues descending
    local Np, va, ve, ind;
    Np = smatpwr(N, -0.5);
    { va, ve } = eighv(Np * M * Np);
    ve = Np * ve;
    ind = rev(sortind(va));     // sort eigenvalues in descending order
    va = va[ind];
    ve = ve[., ind];
    retp(va, ve);
endp;
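
For readers following along in another language, the smatpwr logic translates directly to NumPy (a sketch only; the function name and test matrix are illustrative, not part of the GAUSS code above):

```python
import numpy as np

def smatpwr(M, p):
    """Matrix power M**p of a symmetric matrix via spectral decomposition."""
    va, ve = np.linalg.eigh(M)             # eigenvalues ascending, ve orthonormal
    return ve @ np.diag(va ** p) @ ve.T

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
N = B @ B.T + 6 * np.eye(6)                # well-conditioned SPD matrix

R = smatpwr(N, 0.5)                        # square root: R @ R recovers N
Ninv_half = smatpwr(N, -0.5)               # N^(-1/2), as used in eigen() above
```
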

Obviously, the square root of a symmetric matrix can be computed by means of the spectral decomposition if all eigenvalues are positive. Unfortunately, the eighv function can produce slightly negative eigenvalues (I use GAUSS 14), for example -3.9457992e-010. Presumably, I can skip such insignificant eigenvalues, or the procedure can replace such a negative eigenvalue with a very small positive one. The other solution would be to employ the symmetrisation/averaging proposed by sebszab. Hopefully, the final result will not be affected.
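
Both workarounds combine naturally, as in this NumPy sketch (the function name safe_sqrtm and the floor tolerance are illustrative choices, not GAUSS settings): symmetrise first, then clip round-off negatives up to a small positive floor before taking square roots:

```python
import numpy as np

def safe_sqrtm(M, floor=1e-12):
    """Symmetric square root with tiny negative eigenvalues floored.

    floor is an illustrative tolerance: eigenvalues below it (including
    small round-off negatives) are clipped up to it before sqrt.
    """
    S = 0.5 * (M + M.T)                    # symmetrise, as sebszab suggests
    va, ve = np.linalg.eigh(S)
    va = np.clip(va, floor, None)          # remove round-off negatives
    return ve @ np.diag(np.sqrt(va)) @ ve.T

rng = np.random.default_rng(4)
B = rng.standard_normal((30, 20))
M = B @ B.T                                # PSD but rank-deficient
R = safe_sqrtm(M)                          # finite despite "zero" eigenvalues
```
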

Am I right that for matrices larger than 12x12 the singularity tolerance is smaller than 1.0e-14 for functions like invpd or eighv? How is it defined for a 30x30 matrix, for example?

Best regards,
PiotrK
