How to run Narayan and Popp (2010) unit root test with two structural break

Dear friends,

I have the NP (2010) code. I wish to run the unit root test using my data in a .txt file. The code mentions that **the first column should contain the years or quarters (i.e. 1973.3)** and the second the observations.

My question is: do I need to include variable names for the columns, or should I remove them from the .txt file? I am using quarterly data consisting of 68 observations, so I use the following command in the code:

load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;

However, the result of my test indicates that the optimal lag is 0, and I get the same for my other variables. When I remove the year column from the .txt file and run the command again, the optimal lag becomes 4 or 5. But according to the author's comments in the code file, the first column should be the year. I have tried this with other data too: when I include the year column, the optimal lag becomes 0, and it only takes a value other than 0 when I remove the year column. I hope somebody can help me with this issue.

1. The hp.txt file contains the following data:

199901 12.15932752
199902 12.1605188
199903 12.16127285
199904 12.1601588
200001 12.17354945
200002 12.20002635
200003 12.22209771
200004 12.24953292
200101 12.25672023
200102 12.27340002
200103 12.29917945
200104 12.32090395
200201 12.27644336
200202 12.29779613
200203 12.3310463
200204 12.352683
200301 12.37000409
200302 12.39040593
200303 12.42399994
200304 12.44148696
200401 12.45892101
200402 12.48462339
200403 12.51449253
200404 12.53540877
200501 12.55809654
200502 12.57862165
200503 12.60764954
200504 12.63255579
200601 12.7249825
200602 12.73698159
200603 12.75354772
200604 12.76147293
200701 12.76360338
200702 12.78515899
200703 12.8450623
200704 12.82605194
200801 12.8367878
200802 12.8463801
200803 12.83849496
200804 12.88811052
200901 12.91527613
200902 12.94327846
200903 12.96138698
200904 12.99125873
201001 13.02146312
201002 13.05275595
201003 13.07740982
201004 13.10662703
201101 13.12621119
201102 13.15433397
201103 13.17839227
201104 13.20854463
201201 13.22737758
201202 13.27411048
201203 13.29385635
201204 13.31621997
201301 13.33573626
201302 13.35976042
201303 13.37956007
201304 13.39272866
201401 13.40379237
201402 13.42350852
201403 13.43862087
201404 13.4529913
201501 13.48186547
201502 13.48289551
201503 13.48817395
201504 13.49770179

 

2. The original Narayan and Popp (2010) code is as follows:

NEW;

FORMAT /M1 /ROS 8,4;

LIBRARY pgraph;

_pdate="";

_pcolor = 1;

_pmcolor = ZEROS(8,1)|15;

 

/*

path1 = "Z:\\Stephan.Popp\\Projekte\\perronmultiple\\gauss\\output.out";

 

OUTPUT FILE = ^path1 ON;

*/

 

"******************************************";

"date: " datestr(date) "  time:  " timestr(time);

"*******************************************";

 

/*           DATA

**  the first column should contain the years or quarters (i.e. 1973.3)

**  and the second the observations

*/

 

load yy[68,2]=C:\Users\\hafizah\Desktop\lrhd.txt;           @ change the path and set yy[number of observations, 2] @

 

/* Randomly chosen dataset

yy=SEQA(1900,1,100)~RNDN(100,1);

*/

 

XY(yy[.,1],yy[.,2]);

 

yyy= yy[.,2];

kmax = 5;                             @ max lag                                           @

ttt = rows(yyy);

tau = 0.2;                             @ trimming factor           @

 

"Series: Log(.)";

"Sample:" yy[1,1]~yy[rows(yyy),1];

"# observations:" ttt;

"maximum lag " kmax;

"trimming factor " tau;

 

"program: popp2break.prg" ;

"*******************************";

tbunter = MAXC(3+kmax|CEIL(tau*ttt));             @ lower break date @

"break date floor, effective tau " tbunter~yy[tbunter,1]~tbunter/ttt;

tbober  = MINC(ttt-3-kmax|FLOOR((1-tau)*ttt));    @ upper break date @

"break date ceiling, effective (1-tau) " tbober~yy[tbober,1]~tbober/ttt;

"*******************************************";

 

/************** Model 0 *****************/

 

ergeb = ZEROS(2,1);     @ vector for storing the results @

tbopt = ZEROS(2,1);

 

ttb1 = tbunter;

ttb2 = 0;

DO WHILE ttb1 <= tbober; @ loop over the candidate break dates T_B @

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break0b(yyy,ttb1,ttb2,kmax);

 

IF ABS(ttheta1) > ABS(ergeb[1,1]);

ergeb[1,1] = ttheta1;

ergeb[2,1] = ttb1;

ENDIF;

 

ttb1 = ttb1 + 1;

ENDO;

 

tbopt[1] = ergeb[2,1];

 

ergeb = ZEROS(2,1);     @ vector for storing the results @

 

ttb2 = tbunter;

DO WHILE ttb2 <= tbober;

IF ABS(ttb2 - tbopt[1]) < 2; ttb2 = tbopt[1] + 2; ENDIF;

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break0b(yyy,tbopt[1],ttb2,kmax);

 

IF ABS(ttheta1) > ABS(ergeb[1,1]);

ergeb[1,1] = ttheta1;

ergeb[2,1] = ttb2;

ENDIF;

 

ttb2 = ttb2 + 1;

ENDO;

 

tbopt[2] = ergeb[2,1];

tbopt = SORTC(tbopt,1);

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break0b(yyy,tbopt[1],tbopt[2],kmax);

 

"****************";

"output";

"****************";

"model type M0";

"first break  " tbopt[1]~yy[tbopt[1],1]~tbopt[1]/ROWS(yy);

"second break " tbopt[2]~yy[tbopt[2],1]~tbopt[2]/ROWS(yy);

"phi = rho-1  " rho1;

"t value      " trho1;

"optimal lag  " kk1;

"variance     " varres1;

"------";

"coeff tstat  ";

"yyverz constant du1verz du2verz dtb1 dtb2";

pp1~tstat1;

 

 

/************** Model 1 *****************/

 

ergeb = ZEROS(2,1);     @ vector for storing the results @

tbopt = ZEROS(2,1);

 

ttb1 = tbunter;

ttb2 = 0;

DO WHILE ttb1 <= tbober; @ loop over the candidate break dates T_B @

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break1b(yyy,ttb1,ttb2,kmax);

 

IF ABS(ttheta1) > ABS(ergeb[1,1]);

ergeb[1,1] = ttheta1;

ergeb[2,1] = ttb1;

ENDIF;

 

ttb1 = ttb1 + 1;

ENDO;

 

tbopt[1] = ergeb[2,1];

 

ergeb = ZEROS(2,1);     @ vector for storing the results @

 

ttb2 = tbunter;

DO WHILE ttb2 <= tbober;

IF ABS(ttb2 - tbopt[1]) < 2; ttb2 = tbopt[1] + 2; ENDIF;

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break1b(yyy,tbopt[1],ttb2,kmax);

 

IF ABS(ttheta1) > ABS(ergeb[1,1]);

ergeb[1,1] = ttheta1;

ergeb[2,1] = ttb2;

ENDIF;

 

ttb2 = ttb2 + 1;

ENDO;

 

tbopt[2] = ergeb[2,1];

tbopt = SORTC(tbopt,1);

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break1b(yyy,tbopt[1],tbopt[2],kmax);

 

"output";

"****************";

"model type M1";

"first break  " tbopt[1]~yy[tbopt[1],1]~tbopt[1]/ROWS(yy);

"second break " tbopt[2]~yy[tbopt[2],1]~tbopt[2]/ROWS(yy);

"phi = rho-1  " rho1;

"t value      " trho1;

"optimal lag  " kk1;

"variance     " varres1;

"------";

"coeff tstat  ";

"yyverz constant du1verz du2verz dtb1 dtb2 time";

pp1~tstat1;

 

 

/************** Model 2 *****************/

 

ergeb = ZEROS(2,1);     @ vector for storing the results @

tbopt = ZEROS(2,1);

 

ttb1 = tbunter;

ttb2 = 0;

DO WHILE ttb1 <= tbober; @ loop over the candidate break dates T_B @

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break2b(yyy,ttb1,ttb2,kmax);

 

IF ABS(ttheta1) > ABS(ergeb[1,1]);

ergeb[1,1] = ttheta1;

ergeb[2,1] = ttb1;

ENDIF;

 

ttb1 = ttb1 + 1;

ENDO;

 

tbopt[1] = ergeb[2,1];

 

ergeb = ZEROS(2,1);     @ vector for storing the results @

 

ttb2 = tbunter;

DO WHILE ttb2 <= tbober;

IF ABS(ttb2 - tbopt[1]) < 2; ttb2 = tbopt[1] + 2; ENDIF;

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break2b(yyy,tbopt[1],ttb2,kmax);

 

IF ABS(ttheta1) > ABS(ergeb[1,1]);

ergeb[1,1] = ttheta1;

ergeb[2,1] = ttb2;

ENDIF;

 

ttb2 = ttb2 + 1;

ENDO;

 

tbopt[2] = ergeb[2,1];

tbopt = SORTC(tbopt,1);

 

{rho1,trho1,ttheta1,pp1,rres1,varres1,tstat1,kk1} = mio2break2b(yyy,tbopt[1],tbopt[2],kmax);

 

"output";

"****************";

"model type M2";

"first break  " tbopt[1]~yy[tbopt[1],1]~tbopt[1]/ROWS(yy);

"second break " tbopt[2]~yy[tbopt[2],1]~tbopt[2]/ROWS(yy);

"phi = rho-1  " rho1;

"t value      " trho1;

"optimal lag  " kk1;

"variance     " varres1;

"------";

"coeff tstat  ";

"yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time";

pp1~tstat1;

 

OUTPUT OFF;

 

END;

 

PROC(8)=mio2break0b(yy,ttb1,ttb2,kkmax);

LOCAL t,mu,du1,du1verz,du2,du2verz,dtb1,dtb2,yyverz,dyy,x1,x,pp,rres;

LOCAL varres,varpp,tstat,kk,ampel,xx,ttheta;

 

t=ROWS(yy);

mu=ONES(t,1);               @ intercept @

du1=ZEROS(t,1);             @ level shift dummy @

du1[ttb1+1:t,1]=ONES(t-ttb1,1);

du1verz=lagn(du1,1);        @ lagged dummy DU_t-1 @

dtb1=ZEROS(t,1);            @ impulse dummy @

dtb1[ttb1+1:ttb1+1,1]=1;

yyverz=lagn(yy,1);

dyy=yy-yyverz;

 

IF ttb2 == 0;

x1=yyverz~mu~du1verz~dtb1;

ELSE;

du2=ZEROS(t,1);             @ level shift dummy @

du2[ttb2+1:t,1]=ONES(t-ttb2,1);

du2verz=lagn(du2,1);        @ lagged dummy DU_t-1 @

dtb2=ZEROS(t,1);            @ impulse dummy @

dtb2[ttb2+1:ttb2+1,1]=1;

 

x1=yyverz~mu~du1verz~du2verz~dtb1~dtb2;

ENDIF;

 

IF kkmax == 0;

 

x = TRIMR(x1,1,0);

dyy = TRIMR(dyy,1,0);

pp = INV(x'x)*x'dyy;                                    @ estimated coefficient vector p @

rres = dyy - x*pp;                                      @ residual vector @

varres =(rres'*rres)/(ROWS(x)-COLS(x));                 @ estimate of the residual variance @

varpp = DIAG(rres'rres*INV(x'x)/(ROWS(x)-COLS(x)));     @ estimate of the variance-covariance matrix @

tstat = pp./SQRT(varpp);                                @ t-statistics of the parameter estimates @

kk = 0;

ELSE;

 

x = TRIMR(x1~SHIFTR(dyy',SEQA(1,1,kkmax),0)',kkmax+1,0);

dyy = TRIMR(dyy,kkmax+1,0);

 

kk = kkmax;

ampel = 0;

 

DO WHILE kk >= 0 AND ampel == 0;

xx = x[.,1:COLS(x1)+kk];

 

pp = INV(xx'xx)*xx'dyy;                                   @ estimated coefficient vector p @

rres = dyy - xx*pp;                                       @ residual vector @

varres =(rres'*rres)/(ROWS(xx)-COLS(xx));                 @ estimate of the residual variance @

varpp = DIAG(rres'rres*INV(xx'xx)/(ROWS(xx)-COLS(xx)));   @ estimate of the variance-covariance matrix @

tstat = pp./SQRT(varpp);                                  @ t-statistics of the parameter estimates @

IF ABS(tstat[COLS(x1)+kk]) > 1.96; ampel = 1; ENDIF;      @ 1.96 = 5% critical value (two-sided) @

kk = kk-1;

ENDO;

kk = kk+1;

 

ENDIF;

 

IF ttb2 == 0;

ttheta = tstat[4];

ELSE;

ttheta = tstat[6];

ENDIF;

 

RETP(pp[1],tstat[1],ttheta,pp,rres,varres,tstat,kk);

 

ENDP;

 

PROC(8)=mio2break1b(yy,ttb1,ttb2,kkmax);

LOCAL t,mu,zeit,du1,du1verz,du2,du2verz,dtb1,dtb2,yyverz,dyy,x1,x,pp,rres;

LOCAL varres,varpp,tstat,kk,ampel,xx,ttheta;

 

t=ROWS(yy);

mu=ONES(t,1);               @ intercept @

zeit=SEQA(1,1,t);           @ time trend @

du1=ZEROS(t,1);             @ level shift dummy @

du1[ttb1+1:t,1]=ONES(t-ttb1,1);

du1verz=lagn(du1,1);        @ lagged dummy DU_t-1 @

dtb1=ZEROS(t,1);            @ impulse dummy @

dtb1[ttb1+1:ttb1+1,1]=1;

yyverz=lagn(yy,1);

dyy=yy-yyverz;

 

IF ttb2 == 0;

x1=yyverz~mu~du1verz~dtb1~zeit;

ELSE;

du2=ZEROS(t,1);             @ level shift dummy @

du2[ttb2+1:t,1]=ONES(t-ttb2,1);

du2verz=lagn(du2,1);        @ lagged dummy DU_t-1 @

dtb2=ZEROS(t,1);            @ impulse dummy @

dtb2[ttb2+1:ttb2+1,1]=1;

 

x1=yyverz~mu~du1verz~du2verz~dtb1~dtb2~zeit;

ENDIF;

 

IF kkmax == 0;

 

x = TRIMR(x1,1,0);

dyy = TRIMR(dyy,1,0);

pp = INV(x'x)*x'dyy;                                    @ estimated coefficient vector p @

rres = dyy - x*pp;                                      @ residual vector @

varres =(rres'*rres)/(ROWS(x)-COLS(x));                 @ estimate of the residual variance @

varpp = DIAG(rres'rres*INV(x'x)/(ROWS(x)-COLS(x)));     @ estimate of the variance-covariance matrix @

tstat = pp./SQRT(varpp);                                @ t-statistics of the parameter estimates @

kk = 0;

ELSE;

 

x = TRIMR(x1~SHIFTR(dyy',SEQA(1,1,kkmax),0)',kkmax+1,0);

dyy = TRIMR(dyy,kkmax+1,0);

 

kk = kkmax;

ampel = 0;

 

DO WHILE kk >= 0 AND ampel == 0;

xx = x[.,1:COLS(x1)+kk];

 

pp = INV(xx'xx)*xx'dyy;                                   @ estimated coefficient vector p @

rres = dyy - xx*pp;                                       @ residual vector @

varres =(rres'*rres)/(ROWS(xx)-COLS(xx));                 @ estimate of the residual variance @

varpp = DIAG(rres'rres*INV(xx'xx)/(ROWS(xx)-COLS(xx)));   @ estimate of the variance-covariance matrix @

tstat = pp./SQRT(varpp);                                  @ t-statistics of the parameter estimates @

IF ABS(tstat[COLS(x1)+kk]) > 1.96; ampel = 1; ENDIF;      @ 1.96 = 5% critical value (two-sided) @

kk = kk-1;

ENDO;

kk = kk+1;

 

ENDIF;

 

IF ttb2 == 0;

ttheta = tstat[4];

ELSE;

ttheta = tstat[6];

ENDIF;

 

RETP(pp[1],tstat[1],ttheta,pp,rres,varres,tstat,kk);

 

ENDP;

 

PROC(8)=mio2break2b(yy,ttb1,ttb2,kkmax);

LOCAL t,mu,zeit,du1,du1verz,du2,du2verz,dtb1,dtb2,dt1,dt1verz,dt2,dt2verz,yyverz,dyy,x1,x,pp,rres;

LOCAL varres,varpp,tstat,kk,ampel,xx,ttheta;

 

t=ROWS(yy);

mu=ONES(t,1);               @ intercept @

zeit=SEQA(1,1,t);           @ time trend @

du1=ZEROS(t,1);             @ level shift dummy @

du1[ttb1+1:t,1]=ONES(t-ttb1,1);

du1verz=lagn(du1,1);        @ lagged dummy DU_t-1 @

dtb1=ZEROS(t,1);            @ impulse dummy @

dtb1[ttb1+1:ttb1+1,1]=1;

dt1 = ZEROS(t,1);

dt1[ttb1+1:t,1] = SEQA(1,1,(t-ttb1));

dt1verz=lagn(dt1,1);

yyverz=lagn(yy,1);

dyy=yy-yyverz;

 

IF ttb2 == 0;

x1=yyverz~mu~du1verz~dtb1~dt1verz~zeit;

ELSE;

du2=ZEROS(t,1);             @ level shift dummy @

du2[ttb2+1:t,1]=ONES(t-ttb2,1);

du2verz=lagn(du2,1);        @ lagged dummy DU_t-1 @

dtb2=ZEROS(t,1);            @ impulse dummy @

dtb2[ttb2+1:ttb2+1,1]=1;

dt2 = ZEROS(t,1);

dt2[ttb2+1:t,1] = SEQA(1,1,(t-ttb2));

dt2verz=lagn(dt2,1);

 

x1=yyverz~mu~du1verz~du2verz~dtb1~dtb2~dt1verz~dt2verz~zeit;

ENDIF;

 

IF kkmax == 0;

 

x = TRIMR(x1,1,0);

dyy = TRIMR(dyy,1,0);

pp = INV(x'x)*x'dyy;                                    @ estimated coefficient vector p @

rres = dyy - x*pp;                                      @ residual vector @

varres =(rres'*rres)/(ROWS(x)-COLS(x));                 @ estimate of the residual variance @

varpp = DIAG(rres'rres*INV(x'x)/(ROWS(x)-COLS(x)));     @ estimate of the variance-covariance matrix @

tstat = pp./SQRT(varpp);                                @ t-statistics of the parameter estimates @

kk = 0;

ELSE;

 

x = TRIMR(x1~SHIFTR(dyy',SEQA(1,1,kkmax),0)',kkmax+1,0);

dyy = TRIMR(dyy,kkmax+1,0);

 

kk = kkmax;

ampel = 0;

 

DO WHILE kk >= 0 AND ampel == 0;

xx = x[.,1:COLS(x1)+kk];

 

pp = INV(xx'xx)*xx'dyy;                                   @ estimated coefficient vector p @

rres = dyy - xx*pp;                                       @ residual vector @

varres =(rres'*rres)/(ROWS(xx)-COLS(xx));                 @ estimate of the residual variance @

varpp = DIAG(rres'rres*INV(xx'xx)/(ROWS(xx)-COLS(xx)));   @ estimate of the variance-covariance matrix @

tstat = pp./SQRT(varpp);                                  @ t-statistics of the parameter estimates @

IF ABS(tstat[COLS(x1)+kk]) > 1.96; ampel = 1; ENDIF;      @ 1.96 = 5% critical value (two-sided) @

kk = kk-1;

ENDO;

kk = kk+1;

 

ENDIF;

 

IF ttb2 == 0;

ttheta = tstat[4];

ELSE;

ttheta = tstat[6];

ENDIF;

 

RETP(pp[1],tstat[1],ttheta,pp,rres,varres,tstat,kk);

 

ENDP;

 

proc diff(x,k) ;

if ( k == 0) ;

retp(x) ;

endif ;

retp(trimr(x,k,0)-trimr(lagn(x,k),k,0)) ;

endp ;

 

proc lagn(x,n);

local y;

y = shiftr(x', n, (miss(0, 0))');

retp(y');

endp;

9 Answers






This code assumes that there are no variable names in the file. The first thing it does is load EVERY line from the file:

load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;           @ change path settings and yy[here include # of observations ,2]@

If the first row of the file contains variable name headers, then the yy matrix will have the variable names in the numeric elements of its first row, yy[1,1:2]. The next thing the program does is graph all of the time series elements that were read in, with this command:

XY(yy[.,1],yy[.,2]);

The variable names would cause problems if you tried to graph them. So, if this graph looks like what you expect to see from your data, then you are probably inputting the data correctly.
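As a quick way to make that check concrete, you could also print the first few rows right after loading and confirm that the date column came through as numbers. This is only a sketch (not part of the original program); adjust the path to your own file:

```gauss
@ sketch: verify that load read the file as expected @
load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;

@ the first column should show 199901, 199902, 199903, ... @
print yy[1:3,.];
```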

I will see if I have some time to look more into this later.

aptech




Thank you very much for your feedback. I really appreciate your help, as this is the first time I am using this software for analysis. I have tried to run the test for the HP variable and obtained the following results.

1) First column: the year; second column: the observations

Series: Log(.)
Sample:2.178e-076 2.015e+005
# observations: 68.00
maximum lag 5.000
trimming factor 0.2000
program: popp2break.prg
*******************************
break date floor, effective tau 14.00 2.002e+005 0.2059
break date ceiling, effective (1-tau) 54.00 2.012e+005 0.7941
*******************************************
****************
output
****************
model type M0
first break 18.00 2.003e+005 0.2647
second break 49.00 2.011e+005 0.7206
phi = rho-1 -0.007797
t value -0.5144
optimal lag 0.0000
variance 0.0001040
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2

-0.007797 -0.5144
0.04127 0.5870
0.006272 1.400
0.01468 2.042
0.02733 2.570
0.03093 2.881
output
****************
model type M1
first break 39.00 2.008e+005 0.5735
second break 49.00 2.011e+005 0.7206
phi = rho-1 -0.08497
t value -2.983
optimal lag 0.0000
variance 8.357e-005
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2 time

-0.08497 -2.983
0.3872 3.030
0.003502 0.7130
0.01983 2.894
-0.02939 -3.044
0.02618 2.692
0.0009555 2.883
output
****************
model type M2
first break 39.00 2.008e+005 0.5735
second break 49.00 2.011e+005 0.7206
phi = rho-1 -0.2916
t value -2.652
optimal lag 0.0000
variance 7.781e-005
------
coeff tstat
yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time

-0.2916 -2.652
1.325 2.658
-0.009895 -1.290
0.01728 2.017
-0.02461 -2.562
0.01883 1.722
0.003535 2.409
-0.0003207 -0.2263
0.002584 2.854

 

2) Only one column, the observations; the date column removed

Series: Log(.)
Sample:1.314e-047 5.435
# observations: 68.00
maximum lag 5.000
trimming factor 0.2000
program: popp2break.prg
*******************************
break date floor, effective tau 14.00 4.757 0.2059
break date ceiling, effective (1-tau) 54.00 4.878 0.7941
*******************************************
****************
output
****************
model type M0
first break 34.00 5.435 0.5000
second break 38.00 4.619 0.5588
phi = rho-1 0.0009371
t value 0.06047
optimal lag 4.000
variance 0.0001672
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2

0.0009371 0.06047
0.005737 0.08118
0.2810 4.392
-0.2814 -4.396
-0.9143 -62.25
-0.5497 -3.820
0.3002 3.984
0.2924 3.921
0.3370 4.436
-0.2608 -1.977
output
****************
model type M1
first break 34.00 5.435 0.5000
second break 38.00 4.619 0.5588
phi = rho-1 -0.09195
t value -3.030
optimal lag 3.000
variance 0.0001535
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2 time

-0.09195 -3.030
0.4091 3.079
0.2089 3.281
-0.2890 -4.710
-0.9072 -63.95
-0.3119 -4.748
0.002348 2.984
0.3024 4.192
0.2961 4.145
0.3343 4.609
output
****************
model type M2
first break 34.00 5.435 0.5000
second break 39.00 4.611 0.5735
phi = rho-1 -0.06975
t value -2.180
optimal lag 3.000
variance 0.0001777
------
coeff tstat
yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time

-0.06975 -2.180
0.3111 2.225
0.4002 3.162
0.1367 3.453
-0.9109 -58.48
0.1339 3.212
-0.1213 -3.781
0.1213 3.781
0.002044 2.271
0.3618 3.444
0.2286 3.193
0.1338 3.315

 

What makes me doubt this result is that when I run the test on all of my variables (7 variables), every result shows an optimal lag of 0 when I put two columns in the .txt file. But when I remove the date column and run it again, the optimal lag changes to a number between 1 and 5. I have looked at other papers that use this test, and very few report an optimal lag of 0. So I am a bit confused about whether I should keep the date column or not, even though the author's comments in the original code say the first column should be the date. I hope you can help clear up my confusion about this matter; I really appreciate your kind help.






There seems to be a problem in your output. I think your printed output should look like the listing below. Notice that the Sample: numbers should look like the first and last dates in the data you posted above; however, in the output you posted, the numbers next to Sample: do not look like the dates in your data.

******************************************
date:  9/28/16  time:  19:10:19
*******************************************
Series: Log(.)
Sample:1.999e+05 2.015e+05
# observations:   68.00 
maximum lag    5.000 
trimming factor   0.2000 
program: popp2break.prg
*******************************
break date floor, effective tau    14.00 2.002e+05   0.2059 
break date ceiling, effective (1-tau)    54.00 2.012e+05   0.7941 
*******************************************
****************
output
****************
model type M0
first break     28.00 2.005e+05   0.4118 
second break    34.00 2.007e+05   0.5000 
phi = rho-1  -0.0002128 
t value      -0.02124 
optimal lag     0.000 
variance     0.0001977 
------
coeff tstat  
yyverz constant du1verz du2verz dtb1 dtb2

-0.0002128 -0.02124 
 0.02230   0.1796 
-0.007548  -0.9596 
0.007831   0.9842 
 0.07282    4.990 
 0.04788    3.107 
output
****************
model type M1
first break     28.00 2.005e+05   0.4118 
second break    34.00 2.007e+05   0.5000 
phi = rho-1   -0.1441 
t value        -2.215 
optimal lag     0.000 
variance     0.0001845 
------
coeff tstat  
yyverz constant du1verz du2verz dtb1 dtb2 time

 -0.1441   -2.215 
   1.747    2.239 
-0.002609  -0.3297 
0.002863   0.3578 
 0.07095    5.024 
 0.04338    2.888 
0.003275    2.237 
output
****************
model type M2
first break     28.00 2.005e+05   0.4118 
second break    35.00 2.007e+05   0.5147 
phi = rho-1  -0.07290 
t value       -0.8420 
optimal lag     0.000 
variance     0.0001859 
------
coeff tstat  
yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time

-0.07290  -0.8420 
  0.8887   0.8513 
-0.02251   -1.303 
-0.01889   -1.387 
 0.06873    4.527 
-0.06011   -3.137 
0.005987    1.784 
-0.006514   -1.891 
0.001928    1.144 

I think your data is being read in incorrectly when you remove the date column.

You can check the value of yyy that the program thinks it has by adding this print statement:

//Diagnostic print
print "yyy = " yyy;
end;

Right between these lines:

"break date ceiling, effective (1-tau) " tbober~yy[tbober,1]~tbober/ttt;

"*******************************************";

 //Insert print statement from block above here

/************** Model 0 *****************/

Then you can run the code both ways to see what it has for yyy. You could also use the GAUSS debugger to step through the code line-by-line. It is very helpful and makes it easy to view your variables as they are changed.

aptech




I am using quarterly data. I am not sure how to specify the data in the .txt file so that the GAUSS software can read my data correctly.






May I know how I can run the GAUSS debugger? I am sorry, I have no idea about this. I really appreciate your feedback.






If you want to know how GAUSS is reading in your data, you can enter the command that you are using in the program input/output window:

load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;

and then either print the value of yy to the screen with the command:

print yy;

or you can go to the Data Page (Click the Data tab on the left side of the main GAUSS application) and double-click on yy in the Symbols list window on the left side of the Data Page.

I would also recommend that you use the function csvReadM instead of load. I think the behavior of csvReadM is more intuitive than that of load. Here are some examples of usage:

//Read all rows and all columns starting after
//the header line (i.e. line 2 of the file) of a file
//where the data is separated by commas
yy = csvReadM("mydatafile.txt", 2);

//Read a comma-separated file,
//but skip the first row and the first column
yy = csvReadM("mydatafile.txt", 2, 2);

//Read a SPACE-SEPARATED file,
//but skip the first row and the first column
yy = csvReadM("mydatafile.txt", 2, 2, " ");

The help page for csvReadM has more complete information and examples.
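Applied to the space-separated hp.txt posted above, which has no header row or label column to skip, the call might look like the sketch below (the path is taken from the load command earlier in the thread; adjust it to your machine):

```gauss
@ sketch: read the space-separated file starting at row 1, @
@ column 1, since there is nothing to skip                 @
yy = csvReadM("C:\\Users\\hafizah\\Desktop\\hp.txt", 1, 1, " ");
```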

aptech




At the most basic level, you can use the debugger by clicking the Debug button (looks like a ladybug on the main GAUSS toolbar) instead of the Run button. Then the program will be started in debug mode and you will see some buttons that allow you to step line-by-line and look at the status of variables, etc.

Take a look at the documentation for more details Help Tab->User Guide->Using the GAUSS Debugger. Please feel free to post any questions you have about using the debugger. However, it would be best to post these as a new question on the forum.

aptech




Using the hp data above, when I run the diagnostic print, I get the following result:

 

//Diagnostic print
print "yy = " yy;
end;

yy =
2.178e-076 12.16
1.999e+005 12.16
1.999e+005 12.16
1.999e+005 12.16
2.000e+005 12.17
2.000e+005 12.20
2.000e+005 12.22
2.000e+005 12.25
2.001e+005 12.26
2.001e+005 12.27
2.001e+005 12.30
2.001e+005 12.32
2.002e+005 12.28
2.002e+005 12.30
2.002e+005 12.33
2.002e+005 12.35
2.003e+005 12.37
2.003e+005 12.39
2.003e+005 12.42
2.003e+005 12.44
2.004e+005 12.46
2.004e+005 12.48
2.004e+005 12.51
2.004e+005 12.54
2.005e+005 12.56
2.005e+005 12.58
2.005e+005 12.61
2.005e+005 12.63
2.006e+005 12.72
2.006e+005 12.74
2.006e+005 12.75
2.006e+005 12.76
2.007e+005 12.76
2.007e+005 12.79
2.007e+005 12.85
2.007e+005 12.83
2.008e+005 12.84
2.008e+005 12.85
2.008e+005 12.84
2.008e+005 12.89
2.009e+005 12.92
2.009e+005 12.94
2.009e+005 12.96
2.009e+005 12.99
2.010e+005 13.02
2.010e+005 13.05
2.010e+005 13.08
2.010e+005 13.11
2.011e+005 13.13
2.011e+005 13.15
2.011e+005 13.18
2.011e+005 13.21
2.012e+005 13.23
2.012e+005 13.27
2.012e+005 13.29
2.012e+005 13.32
2.013e+005 13.34
2.013e+005 13.36
2.013e+005 13.38
2.013e+005 13.39
2.014e+005 13.40
2.014e+005 13.42
2.014e+005 13.44
2.014e+005 13.45
2.015e+005 13.48
2.015e+005 13.48
2.015e+005 13.49
2.015e+005 13.50

The data is correct except for the starting sample. I even tried using yearly data, but this problem still happens with the starting sample. Do I need to change anything else in the code so that the sample will follow the data in my .txt file?






Thank you, sir. I've checked the Data tab: in the first row, GAUSS reads my date as 0.000000. Only from the second row onwards is it correct, as in the file. Can I directly edit the data to 1999.01?
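One possible workaround, assuming only the first date cell is being corrupted (for example, by an invisible character at the very start of the file), would be to overwrite that single element right after loading instead of editing the file itself. This is a sketch, not a confirmed fix:

```gauss
load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;

@ restore the known first date, which load read as 0/garbage @
yy[1,1] = 199901;
```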

Your Answer

9 Answers

0

This code assumes that there is not a variable name in the file. The first thing it does is load in EVERY line from the file:

load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;           @ change path settings and yy[here include # of observations ,2]@

If the first row of the file contains variable name headers, then the yy matrix will have the variable names in the numeric elements of yy[1:2,1]. The next thing that the program does is to graph all elements that were read in of the time series, with this command:

XY(yy[.,1],yy[.,2]);

The variable names would cause problems if you tried to graph them. So, if this graph looks like what you expect to see from your data, then you are probably inputing the data correctly.

I will see if I have some time to look more into this later.

0

Thank you very much for your feedback. I really appreciate your help as this is the first time Im using this software for analysis. I have tried to run the test for the HP variable and I obtain the following result

1) 1st column the year and the 2nd column the observations

Series: Log(.)
Sample:2.178e-076 2.015e+005
# observations: 68.00
maximum lag 5.000
trimming factor 0.2000
program: popp2break.prg
*******************************
break date floor, effective tau 14.00 2.002e+005 0.2059
break date ceiling, effective (1-tau) 54.00 2.012e+005 0.7941
*******************************************
****************
output
****************
model type M0
first break 18.00 2.003e+005 0.2647
second break 49.00 2.011e+005 0.7206
phi = rho-1 -0.007797
t value -0.5144
optimal lag 0.0000
variance 0.0001040
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2

-0.007797 -0.5144
0.04127 0.5870
0.006272 1.400
0.01468 2.042
0.02733 2.570
0.03093 2.881
output
****************
model type M1
first break 39.00 2.008e+005 0.5735
second break 49.00 2.011e+005 0.7206
phi = rho-1 -0.08497
t value -2.983
optimal lag 0.0000
variance 8.357e-005
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2 time

-0.08497 -2.983
0.3872 3.030
0.003502 0.7130
0.01983 2.894
-0.02939 -3.044
0.02618 2.692
0.0009555 2.883
output
****************
model type M2
first break 39.00 2.008e+005 0.5735
second break 49.00 2.011e+005 0.7206
phi = rho-1 -0.2916
t value -2.652
optimal lag 0.0000
variance 7.781e-005
------
coeff tstat
yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time

-0.2916 -2.652
1.325 2.658
-0.009895 -1.290
0.01728 2.017
-0.02461 -2.562
0.01883 1.722
0.003535 2.409
-0.0003207 -0.2263
0.002584 2.854

 

2) Only 1 column - the observation only. I remove the date column

Series: Log(.)
Sample:1.314e-047 5.435
# observations: 68.00
maximum lag 5.000
trimming factor 0.2000
program: popp2break.prg
*******************************
break date floor, effective tau 14.00 4.757 0.2059
break date ceiling, effective (1-tau) 54.00 4.878 0.7941
*******************************************
****************
output
****************
model type M0
first break 34.00 5.435 0.5000
second break 38.00 4.619 0.5588
phi = rho-1 0.0009371
t value 0.06047
optimal lag 4.000
variance 0.0001672
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2

0.0009371 0.06047
0.005737 0.08118
0.2810 4.392
-0.2814 -4.396
-0.9143 -62.25
-0.5497 -3.820
0.3002 3.984
0.2924 3.921
0.3370 4.436
-0.2608 -1.977
output
****************
model type M1
first break 34.00 5.435 0.5000
second break 38.00 4.619 0.5588
phi = rho-1 -0.09195
t value -3.030
optimal lag 3.000
variance 0.0001535
------
coeff tstat
yyverz constant du1verz du2verz dtb1 dtb2 time

-0.09195 -3.030
0.4091 3.079
0.2089 3.281
-0.2890 -4.710
-0.9072 -63.95
-0.3119 -4.748
0.002348 2.984
0.3024 4.192
0.2961 4.145
0.3343 4.609
output
****************
model type M2
first break 34.00 5.435 0.5000
second break 39.00 4.611 0.5735
phi = rho-1 -0.06975
t value -2.180
optimal lag 3.000
variance 0.0001777
------
coeff tstat
yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time

-0.06975 -2.180
0.3111 2.225
0.4002 3.162
0.1367 3.453
-0.9109 -58.48
0.1339 3.212
-0.1213 -3.781
0.1213 3.781
0.002044 2.271
0.3618 3.444
0.2286 3.193
0.1338 3.315


What makes me doubt this result is that when I run the test on all seven of my variables, every result shows an optimal lag of "0" when the .txt file has two columns. But when I remove the date column and run it again, the optimal lag changes to a number between 1 and 5. I have tried to refer to some other papers that use this test, but not many report an optimal lag of "0". So I am a bit confused about whether I should keep the date column or not. Based on the original code, though, the author did mention that the first column is the date. I hope you can help clear up my confusion on this matter. I really appreciate your kind help.


There seems to be a problem in your output. I think your printed output should look like the listing below. Notice that the Sample: numbers look like the first and last dates in the data you posted above; in the output you posted, however, the numbers next to Sample: do not look like the dates in your data.

******************************************
date:  9/28/16  time:  19:10:19
*******************************************
Series: Log(.)
Sample:1.999e+05 2.015e+05
# observations:   68.00 
maximum lag    5.000 
trimming factor   0.2000 
program: popp2break.prg
*******************************
break date floor, effective tau    14.00 2.002e+05   0.2059 
break date ceiling, effective (1-tau)    54.00 2.012e+05   0.7941 
*******************************************
****************
output
****************
model type M0
first break     28.00 2.005e+05   0.4118 
second break    34.00 2.007e+05   0.5000 
phi = rho-1  -0.0002128 
t value      -0.02124 
optimal lag     0.000 
variance     0.0001977 
------
coeff tstat  
yyverz constant du1verz du2verz dtb1 dtb2

-0.0002128 -0.02124 
 0.02230   0.1796 
-0.007548  -0.9596 
0.007831   0.9842 
 0.07282    4.990 
 0.04788    3.107 
output
****************
model type M1
first break     28.00 2.005e+05   0.4118 
second break    34.00 2.007e+05   0.5000 
phi = rho-1   -0.1441 
t value        -2.215 
optimal lag     0.000 
variance     0.0001845 
------
coeff tstat  
yyverz constant du1verz du2verz dtb1 dtb2 time

 -0.1441   -2.215 
   1.747    2.239 
-0.002609  -0.3297 
0.002863   0.3578 
 0.07095    5.024 
 0.04338    2.888 
0.003275    2.237 
output
****************
model type M2
first break     28.00 2.005e+05   0.4118 
second break    35.00 2.007e+05   0.5147 
phi = rho-1  -0.07290 
t value       -0.8420 
optimal lag     0.000 
variance     0.0001859 
------
coeff tstat  
yylagged constant du1lagged du2lagged dtb1 dtb2 dt1verz dt2verz time

-0.07290  -0.8420 
  0.8887   0.8513 
-0.02251   -1.303 
-0.01889   -1.387 
 0.06873    4.527 
-0.06011   -3.137 
0.005987    1.784 
-0.006514   -1.891 
0.001928    1.144 

I think your data is being read in incorrectly when you remove the date column.

You can check the value of yyy that the program thinks it has by adding this print statement:

//Diagnostic print
print "yyy = " yyy;
end;

Right between these lines:

"break date ceiling, effective (1-tau) " tbober~yy[tbober,1]~tbober/ttt;

"*******************************************";

 //Insert print statement from block above here

/************** Model 0 *****************/

Then you can run the code both ways to see what it has for yyy. You could also use the GAUSS debugger to step through the code line-by-line. It is very helpful and makes it easy to view your variables as they are changed.


I am using quarterly data. I am not sure how to specify the dates in the .txt file so that GAUSS can read my data correctly.


May I know how I can run the GAUSS debugger? I am sorry, I have no idea about this. I really appreciate your feedback.

0

If you want to know how GAUSS is reading in your data, you can enter the command that you are using in the program input/output window:

load yy[68,2]=C:\Users\\hafizah\Desktop\hp.txt;

and then either print the value of yy to the screen with the command:

print yy;

or you can go to the Data Page (Click the Data tab on the left side of the main GAUSS application) and double-click on yy in the Symbols list window on the left side of the Data Page.
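For example (a sketch using the path from your command; adjust the path if yours differs), you can load the file and print only the first few rows to check how the first column is being read:

load yy[68,2]=C:\Users\hafizah\Desktop\hp.txt;
//Print the first five rows, all columns
print yy[1:5,.];

If the first element prints as something like 2.178e-076 instead of 1.999e+005, then the first value in the file is not being parsed as a number.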

I would also recommend that you use the function csvReadM instead of load. I think the behavior of csvReadM is more intuitive than load. Here are some examples of usage:

//Read all rows and all columns starting after
//the header line (i.e. line 2 of the file) of a file where the
//data is separated by commas
yy = csvReadM("mydatafile.txt", 2);
//Read a comma separated file
//but skip the first row and the first column
yy = csvReadM("mydatafile.txt", 2, 2);
//Read a SPACE SEPARATED file
//but skip the first row and the first column
yy = csvReadM("mydatafile.txt", 2, 2, " ");

The help page for csvReadM has more complete information and examples.
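For example (a sketch, assuming your hp.txt is space separated and has no header row), you could read both columns of your file with:

//Start at row 1, column 1 (skip nothing), space delimiter
yy = csvReadM("C:/Users/hafizah/Desktop/hp.txt", 1, 1, " ");

Here the 1, 1 arguments tell csvReadM to start reading at the first row and first column, i.e. to skip nothing.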


At the most basic level, you can use the debugger by clicking the Debug button (looks like a ladybug on the main GAUSS toolbar) instead of the Run button. Then the program will be started in debug mode and you will see some buttons that allow you to step line-by-line and look at the status of variables, etc.

Take a look at the documentation for more details Help Tab->User Guide->Using the GAUSS Debugger. Please feel free to post any questions you have about using the debugger. However, it would be best to post these as a new question on the forum.


Using the hp data above, when I run the diagnostic print, I get the following result:


//Diagnostic print
print "yy = " yy;
end;

yy =
2.178e-076 12.16
1.999e+005 12.16
1.999e+005 12.16
1.999e+005 12.16
2.000e+005 12.17
2.000e+005 12.20
2.000e+005 12.22
2.000e+005 12.25
2.001e+005 12.26
2.001e+005 12.27
2.001e+005 12.30
2.001e+005 12.32
2.002e+005 12.28
2.002e+005 12.30
2.002e+005 12.33
2.002e+005 12.35
2.003e+005 12.37
2.003e+005 12.39
2.003e+005 12.42
2.003e+005 12.44
2.004e+005 12.46
2.004e+005 12.48
2.004e+005 12.51
2.004e+005 12.54
2.005e+005 12.56
2.005e+005 12.58
2.005e+005 12.61
2.005e+005 12.63
2.006e+005 12.72
2.006e+005 12.74
2.006e+005 12.75
2.006e+005 12.76
2.007e+005 12.76
2.007e+005 12.79
2.007e+005 12.85
2.007e+005 12.83
2.008e+005 12.84
2.008e+005 12.85
2.008e+005 12.84
2.008e+005 12.89
2.009e+005 12.92
2.009e+005 12.94
2.009e+005 12.96
2.009e+005 12.99
2.010e+005 13.02
2.010e+005 13.05
2.010e+005 13.08
2.010e+005 13.11
2.011e+005 13.13
2.011e+005 13.15
2.011e+005 13.18
2.011e+005 13.21
2.012e+005 13.23
2.012e+005 13.27
2.012e+005 13.29
2.012e+005 13.32
2.013e+005 13.34
2.013e+005 13.36
2.013e+005 13.38
2.013e+005 13.39
2.014e+005 13.40
2.014e+005 13.42
2.014e+005 13.44
2.014e+005 13.45
2.015e+005 13.48
2.015e+005 13.48
2.015e+005 13.49
2.015e+005 13.50

The data is correct except for the starting sample value. I even tried yearly data, but this problem still happens for the first observation. Do I need to change anything else in the code so that the sample follows the data in my .txt file?


Thank you, sir. I've checked the Data tab; for the first row, GAUSS reads my data as 0.000000. Only from the second row onwards is it correct, as in the file. Can I directly edit the data to 1999.01?

