Problem using intquad2, intquad3 functions

Hello,

I am having trouble understanding how to use the intquad2 and intquad3 GAUSS functions.

For example, when using the intquad2 function, I define the limits of the two normal variables as follows:

eLim1 = 5.7*ones(100,1)|-5.7*ones(100,1);

eLim2 = 5.7*ones(100,1)|-5.7*ones(100,1);

If the global _intord is fixed to 40, then eLim1 becomes a 100 x 40 matrix, but eLim2 is a 100 x 1 matrix. I have trouble using eLim1 and eLim2: for instance, I want to calculate the bivariate or trivariate normal pdf at eLim1 and eLim2 (and eLim3 in the trivariate case), but I cannot concatenate eLim1 and eLim2 because the numbers of rows/columns do not match.

Why do eLim1 and eLim2 have different dimensions, and how should I manipulate these two variables if I want to use intquad2/intquad3?

I would appreciate your help in this regard.

Thanks,

Annesha

 

 

8 Answers



I think I understand what you are trying to do and where you are going wrong. When using intquad2, each of the inputs representing the limits of integration needs to be a 2x1 vector or a 2xN matrix. The value of _intord does not change this. Here is a simple, fully contained example:

proc (1) = myProc(x, y);
    retp(cos(x .* y));
endp;

x1 = { -5.7,
        5.7 };
y1 = { -5.7, 
        5.7 };

area = intquad2(&myProc, x1, y1);
print "area = " area;

Try the above example and then add a statement changing the value of _intord. You will see that the example works in either case. A higher value of _intord gives higher accuracy, but the inputs stay the same.
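For instance, a minimal check might look like this (reusing the myProc, x1, and y1 definitions above; setting _intord to 40 is just one assumed choice of a higher order):

// Increase the integration order and call intquad2 with the same 2x1 limits
_intord = 40;
area40 = intquad2(&myProc, x1, y1);
print "area with _intord = 40: " area40;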

The limits of integration must be passed in with 2 rows, as above. However, you can pass in multiple columns, and one integration will be performed for each column. For didactic purposes, let's run the same code, but with x1 and y1 each given two identical columns:

proc (1) = myProc(x, y);
    retp(cos(x .* y));
endp;

x1 = { -5.7 -5.7,
        5.7 5.7 };
y1 = { -5.7 -5.7, 
        5.7  5.7 };

area = intquad2(&myProc, x1, y1);
print "area = " area;

The above code will return a 2x1 vector where both elements are identical. This is because intquad2 is performing one integration for each column of x1 and y1. Since each column has the same limits, they both return the same answer. This is not typical usage, but illustrates the point. A more meaningful example might be something like this:

proc (1) = myProc(x, y);
    retp(cos(x .* y));
endp;

x1 = { 0   0.5, 
       0.5 0.7};
y1 = { 0   0.5, 
       0.5   1};

area = intquad2(&myProc, x1, y1);
print "area = " sumc(area);

aptech

Hello,

Thank you for your reply. I understand what you are saying. I was actually referring to the following. In the example below you can see that the sizes of eLim1 and eLim2 differ inside proc f1: eLim1 is always 100 x 1, whereas eLim2 is 100 x _intord; in this case eLim2 is 100 x 12, since _intord is set to its default value of 12.

-----------------------------------------------------------------------------------------------------------------

proc lpr(x, dta);
local upLim, lrLim, eLim1, eLim2, y;

@Define upper and lower limits of the distribution@
upLim = 5.7; lrLim = -5.7;
eLim1 = (upLim * ones(1,100))|(lrLim * ones(1,100)); @contains the upper and the lower limit@
eLim2 = (upLim * ones(1,100))|(lrLim * ones(1,100)); @contains the upper and the lower limit@

y = intquad2(&f1, eLim1, eLim2);

retp(ln(y));
endp;

proc f1(eLim1, eLim2);

print "rows(eLim1)" rows(eLim1);
print "cols(eLim1)" cols(eLim1);

print "rows(eLim2)" rows(eLim2);
print "cols(eLim2)" cols(eLim2);

retp(0);
endp;

 



OK, I think I understand the problem now (though I said that last time). The function passed to intquad2 and intquad3 must be ExE conformable.

aptech

Sorry, I did not understand what you mean by ExE conformable.

Also, I wanted to know: is there anything I am not doing correctly? Or is it just something I have to deal with in the f1 proc, i.e., are eLim1 and eLim2 supposed to be of different sizes in f1?

Thanks a lot

Annesha

 

 

 



The question about ExE conformability is a good one. Here is an explanation. Let me know if that does not answer your question.
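Roughly speaking, ExE means element-by-element: the ExE operators (.*, ./, .+, and so on) combine a scalar, a row vector, a column vector, or a full matrix by broadcasting them against each other whenever the shapes are compatible. A minimal sketch:

a = { 1 2 3,
      4 5 6 };      // 2x3 matrix
c = { 10, 20 };     // 2x1 column vector
r = { 1 2 3 };      // 1x3 row vector

print a .* 2;       // scalar applied to every element
print a .* c;       // the 2x1 column is applied to each column of a
print a .* r;       // the 1x3 row is applied to each row of a
print c .* r;       // 2x1 .* 1x3 expands to a 2x3 result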

aptech

Oh, I see: ExE stands for element-by-element, and I understand what element-by-element means. I would appreciate it if you could kindly answer my second question.

"Is there anything I am not doing correctly? Or is it just something I have to deal with in the f1 proc, i.e., are eLim1 and eLim2 supposed to be of different sizes in f1?"

 

Thanks and Regards

Annesha



Oh, sorry. If the function you are integrating is element-by-element, it should work fine with the different sizes of eLim1 and eLim2. Could you post the procedure you are trying to integrate?
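To see why the different sizes are not a problem for an ExE integrand, here is a minimal sketch with placeholder arguments shaped like the ones you reported (100 x 1 and 100 x 12); the product of two univariate normal densities is just an assumed stand-in for a real integrand:

eLim1 = rndn(100, 1);            // stands in for the 100 x 1 argument
eLim2 = rndn(100, 12);           // stands in for the 100 x 12 argument

p = pdfn(eLim1) .* pdfn(eLim2);  // ExE: the column broadcasts across all 12 columns
print rows(p) cols(p);           // 100 x 12, with no concatenation needed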

aptech

Hello,

Below is my code, where I am using two-dimensional integration. The code runs fine and produces plausible estimation results, but it is taking far more time than I would expect. With one-dimensional integration it took on the order of 10 to 20 minutes, whereas this code has taken about 14 hours to converge.

I would appreciate it if you could point out whether I could somehow optimize the running time. If the code below is difficult to follow, I would be happy to share it as an attachment.

Thank you; I appreciate your help.

Regards.

Annesha

------------------------------------------------------------------------------------------------------------------

new; //Clears current workspace

output file = MultLatV6Result.txt reset;

library maxlik;
maxset; //reset global variables in the successive execution of the file

/*****************************************************************************
Global Variable Definitions
*****************************************************************************/
clearg beta, __row, nobs, nc, dataset, nvars, _config; //clears global symbols by setting them equal to zero

//__row = 100; // Number of rows to be read at a time by the log-likelihood function
nobs = 2399; // Number of observations in the dataset
nc = 3; // Number of alternatives (in the universal choice set) in the choice model
nInd = 8; //Number of indicators of the latent variable - all are assumed to be continuous
nLat = 2; //Number of latent variables in the model

@Does not matter what value you provide for beta, it is going to be replaced by the starting values given in vector "b"@
beta = {};

@This dataset contains the cleaned data matrix@
@Look into the GAUSS file MNV1.gss for data cleaning procedure@
dataset = "MNDataV4.dat";
dataMat = loadd(dataset); @dataMat is used in the f1 proc@

/*****************************************************************************
Variable Specification Area
*****************************************************************************/
/* Position of UNO variable (i.e., the column of ones) in data set. */

{ unov,ivuno } = indices(dataset,"uno");

/* Position of SERO variable (i.e., the column of zeros) in data set. */

{ serov,ivsero } = indices(dataset,"sero");

@Independent variable specification for STRUCTURAL EQUATION of LATENT VARIABLE(s)@
let ivl1 = {maleDum Age2 Age3 Commute black sero sero sero sero sero };
let ivl2 = {sero sero sero sero sero maleDum Age2 Age3 Commute black };

{ vl1,ivlt1 } = indices(dataset,ivl1');
{ vl2,ivlt2 } = indices(dataset,ivl2');

ivl = ivlt1'~ivlt2'; @Structural equation variables of latent variable(s)@

@Position of the indicator(s) of latent variable@
{ ind1p,ind1 } = indices(dataset,"Q1"|"Q2"|"Q3"|"Q13"|"Q21"|"Q28"|"Q32"|"Q50");

nvars = cols(ivl1);

/******************************************************************************
Starting values, Active Parameters, Parameter Names
******************************************************************************/
Bta_mean = 8.6262| 7.412| 8.2343| 8.56| 8.7114| 8.188| 8.0526| 8.877; @Means of measurement equations@
Bta_fld = 1.392|0| @Vector of factor loadings@
1.4289|0| @A (nLat * nInd) vector@
0|0.9095| @Will be reshaped into an nInd x nLat matrix@
1.4789|0|
0|0.9826|
0|1.2011|
0|1.2224|
0|0.741;

Bta_sgma = 0.8023| 1.539| 1.5672| 0.7632| 1.2394| 1.341| 1.1209| 1.1031;

Bta_corr = { 1.0 0.0,
0.0 1.0 }; @The correlation matrix of the latent variables@

@b is the starting value vector to be passed to the likelihood function@
b = -0.079| -0.3108| -0.2092| -0.4479| -0.1407| -0.1083| -0.2485| -0.1614| -0.3554| -0.152; @coefficients for the parameters in the latent variable@
b = b|vech(chol(Bta_corr)'); @Take the Cholesky factor of the correlation matrix & use vech to make it a vector@
b = b|Bta_mean|Bta_fld|Bta_sgma; @The parameters of the measurement equations@

/*Specify the parameters that are active - maintaining the order of the indicators*/
act_mean = 1|1|1|1|1|1|1|1; @Manually select which mean is fixed to zero and which mean is not fixed@
act_fld = 1.0|0.0| @Vector of factor loadings - manually fixed@
1.0|0.0|
0.0|1.0|
1.0|0.0|
0.0|1.0|
0.0|1.0|
0.0|1.0|
0.0|1.0;
act_sgma = ones(nInd,1);
act_corr = 0|
0|0; @Active parameters in the structural equation of latent variable@
_max_active = ones(nvars,1)|act_corr|act_mean|act_fld|act_sgma;

/*Specify the names of the parameters*/
varSatLat = 0 $+ "StLat" $+ ftocv(seqa(1,1,nvars),2,0);//seqa(start, increment, total_number)
varCrrLat = 0 $+ "CrrLat" $+ ftocv(seqa(1,1,rows(act_corr)),2,0);//seqa(start, increment, total_number)
varMnInd = 0 $+ "MsMean" $+ ftocv(seqa(1,1,nInd),2,0);//seqa(start, increment, total_number)
varFlInd = 0 $+ "MsFcLd" $+ ftocv(seqa(1,1,(nInd*nLat)),2,0);//seqa(start, increment, total_number)
varSgInd = 0 $+ "MsSgma" $+ ftocv(seqa(1,1,nInd),2,0);//seqa(start, increment, total_number)

_max_ParNames = varSatLat|varCrrLat|varMnInd|varFlInd|varSgInd; // Appending all the parameter (or coefficient) names
_max_Options = { bfgs stepbt };
_max_CovPar = 1; // modify according to the type of standard errors you need, 1 - from inverse of the hessian, 2 - information identity, 3 - sandwich estimator
_max_GradTol = 0.0001;

_intord=12;
_intrec = 0;

{ x,f,g,cov,retcode } = maxprt(maxlik(dataset,0,&lpr,b));

proc lpr(x,dta);
local eLim1, eLim2, upLim, lrLim, y;

@Define upper and lower limits of the distribution@
upLim = 5.7; lrLim = -5.7;
eLim1 = (upLim * ones(1,rows(dta)))|(lrLim * ones(1,rows(dta))); @contains the upper and the lower limit@
eLim2 = (upLim * ones(1,rows(dta)))|(lrLim * ones(1,rows(dta))); @contains the upper and the lower limit@

@initialize the global variable beta with the parameter vector x@
beta = x;

@Will return the expected value of the probability for each person@
y = intquad2(&f1, eLim1, eLim2);

retp(ln(y));
endp;

@Function that will return the choice probabilities for each person@
proc f1(eLim1, eLim2);
local v1, v, vl, vl1, vl2, vm, pInd1, pInd, p, mean, lmbda, sigma, j, i;
local temp, temp1, corr1, corr, nChol, eLim, pdf;

nChol = nLat * (nLat + 1)/2.0; @n*(n+1)/2 - number of Cholesky factors in the correlation matrix@

@Get back the correlation matrix@
corr1 = upmat(xpnd(beta[(nvars+1):(nvars+nChol)])); @xpnd into a symmetric matrix, and take the upper triangular portion@
corr = corr1' * corr1; @multiply the lower triangle with the upper to get back the correlation matrix@

@Define the pdf of omega [the error component in the structural equation of the latent variable]@
temp = {};
j = 1;
do until j>_intord;
eLim = eLim1~eLim2[.,j]; @An nobs x 2 matrix@
temp1 = eLim * inv(corr) * eLim';
temp1 = diag(temp1);
temp = temp ~ temp1;
j = j + 1;
endo;

pdf = (1/((2*pi)*sqrt(det(corr)))) * exp(-0.5*temp); @An nobs x _intord matrix@

@Structural equation of latent variable@
v1 = (ones(nLat,1) .*. beta[1:nvars])*~(dataMat[.,ivl])';

v = {};
j = 1;
do until j > nLat;
v = v~(sumc(v1[(j-1)*nvars+1:(j*nvars),.])); // v stores the latent variables value for each observation. nobs X nvar_latent. 1st col has v1, 2nd v2 and so on for all observations.
j = j+1;
endo;
@v should be a nobs x 2 matrix@

mean = beta[(nvars+nChol+1):(nvars+nChol+nInd)]; @An nInd x 1 vector@
lmbda = reshape(beta[(nvars+nChol+nInd+1):(nvars+nChol+nInd+(nLat*nInd))],nInd,nLat); @An nInd x nLat matrix@
sigma = beta[(nvars+nChol+nInd+(nInd*nLat)+1):(nvars+nChol+nInd+(nInd*nLat)+nInd)];

j = 1;
pInd = {};
do until j > _intord;
eLim = eLim1~eLim2[.,j]; @An nobs x 2 matrix@
vl = v + eLim; @vl is a nobs x 2 matrix@
vm = mean + lmbda * vl'; @vm is a nInd x nobs matrix@
i = 1;
pInd1 = ones(nobs,1);
do until i > nInd;
pInd1 = pInd1 .* ((1 ./ sigma[i]) .* pdfn((dataMat[.,ind1[i]] - vm[i,.]') ./ sigma[i]));
i = i + 1;
endo;
pInd = pInd ~ pInd1;
j = j + 1;
endo;

@Final probability@
p = pInd .* pdf;

retp(p);
endp;
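On the running time, one hedged, untested sketch of a possible saving (assuming vm, sigma, dataMat, and ind1 exactly as defined above): the innermost do-loop over the nInd indicators in f1 can be expressed with ExE operations and prodc, avoiding nInd separate passes per quadrature column:

// Possible replacement for the inner loop over i (a sketch, not tested on the real data)
resid = (dataMat[.,ind1] - vm') ./ sigma';    // nobs x nInd; each column scaled by its own sigma
pInd1 = prodc( (pdfn(resid) ./ sigma')' );    // product over the nInd indicators, an nobs x 1 vector

A similar idea applies to the diag(eLim * inv(corr) * eLim') step, which forms a full nobs x nobs matrix only to keep its diagonal; the row-wise quadratic form sumc(((eLim * inv(corr)) .* eLim)') gives the same nobs x 1 result directly.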
