Could anyone answer my last question, posted a few days ago?

I have been using the cmlmt library to estimate an interest rate model, the "CKLS" model of K. C. Chan, G. Andrew Karolyi, Francis A. Longstaff and Anthony B. Sanders (1992).

*****CKLS Model: dr = a*(b-r)*dt+sigma*r^k*dw*****
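
To be precise, I discretize the SDE with a daily Euler step (dt = 1/250), so each observation r(t+1) given r(t) is treated as Gaussian, and the log-likelihood I maximize is the sum of the corresponding log-densities (this is what my likelihood proc, posted in my reply below, computes):

    r(t+1) | r(t)  ~  N( r(t) + a*(b - r(t))/250 ,  sigma^2 * r(t)^(2k) / 250 )

    lnL = sumc( -(r(t+1) - mean)^2 / (2*std^2) - ln( sqrt(2*pi)*std ) ),  with std = sigma*r(t)^k / sqrt(250)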

As I described a few days ago, I coded the log-likelihood function (after many reviews), and I also supplied GAUSS with the analytic gradient and Hessian; that part appears correct, since GAUSS reports no problems after I enable c0.gradCheck. BUT the result GAUSS gives is really unreasonable.

params[2,1], the parameter "b" of the CKLS model above, should be something like a long-run mean interest rate, so it should be on the order of 0.03; an estimate of 43.3941 is really unreasonable. params[3,1], "sigma", could not be such a large number either, even though it is constrained.

Could anyone tell me where the error might be located? And how could I code, with GAUSS and the cmlmt library, the extended model (a regime-switching model with a CKLS specification inside each regime)?

Thanks in advance for any advice!

********************************************************************************

return code = 0
normal convergence

Log-likelihood 9966.54
Number of cases 2058

Covariance of the parameters computed by the following method:
ML covariance matrix
Parameters     Estimates    Std. err.   Est./s.e.     Prob.    Gradient
---------------------------------------------------------------------
params[1,1]       0.0003       0.0071       0.045    0.9645      0.1369
params[2,1]      43.3941     974.7281       0.045    0.9645      0.0000
params[3,1]      50.0000            .           .    0.0000      0.1226
params[4,1]       1.9391       0.0041     477.249    0.0000     -0.0008

Correlation matrix of the parameters
               1     -0.99987408               .   -0.00080509574
     -0.99987408               1               .    0.00065732412
               .               .               .                .
   -0.0008050958   0.00065732417               .                1

Wald Confidence Limits

0.95 confidence limits
Parameters     Estimates    Lower Limit    Upper Limit    Gradient
----------------------------------------------------------------------
params[1,1]       0.0003        -0.0135         0.0142      0.1369
params[2,1]      43.3941     -1868.1644      1954.9525      0.0000
params[3,1]      50.0000              .              .      0.1226
params[4,1]       1.9391         1.9311         1.9471     -0.0008

Number of iterations 1000
Minutes to convergence 0.21132

out.Hessian:

  79636975.79535       576.89366         0.00548     20464.99581
       576.89366         0.00418         0.00000         0.14808
         0.00548         0.00000         1.65375      -314.76628
     20464.99581         0.14808      -314.76628     60579.31980

*******************************************************************

 

12 Answers



0



I would need to see your command file to see how you are placing constraints.  The third parameter is on a constraint boundary, and the second parameter is clearly not being constrained to be less than 1.0.  Post your command file and I'll be able to comment.



0



Sorry, I don't know how to upload a file here, so I'll write my code directly below:
library cmlmt;

// contains the structure definitions
#include cmlmt.sdf

// load data
r1dayL = xlsReadM("SHIBOR.xlsx","b2:b2059",1,0)/100;
r1dayF = xlsReadM("SHIBOR.xlsx","b3:b2060",1,0)/100;

struct DS d0;
// create a default data structure
d0 = reshape(dsCreate,2,1);
d0[1].datamatrix = r1dayF;
d0[2].datamatrix = r1dayL;

struct PV p0;
// creates a default parameter structure
p0 = pvCreate;
p0 = pvPackm(p0,1|.0242|1|.5,"params",1|1|1|1);

struct cmlmtControl c0;
// creates a default structure
c0 = cmlmtControlCreate;
c0.Bounds = { -50 50,
-50 50,
0.0001 50,
0 5 };
c0.NumObs = 2058;
c0.MaxIters = 10000;
c0.gradCheck = 1;

struct cmlmtResults out;

out = cmlmt(&cklslnl,p0,d0,c0);

// prints the results
call cmlmtprt(out);
print out.Hessian;

proc cklslnl(struct pv p, struct ds d, ind);
    local params/*,a,b,sigma,k*/,r1dayF,r1dayL,aa,ab,aGradient,bGradient,sigmaGradient,kGradient,
          mean,std,lnpdf,ba,bb,bc,bd,ca,cb,mhess;

    struct modelResults mm;

    params = pvUnpack(p,"params");
    // a = params[1];
    // b = params[2];
    // sigma = params[3];
    // k = params[4];
    r1dayF = d[1].datamatrix;
    r1dayL = d[2].datamatrix;

    mean = r1dayL+params[1]*(params[2]-r1dayL)/250;
    std = params[3]*r1dayL^params[4]/250^.5;

    if ind[1];
        lnpdf = -(r1dayF-mean).^2 ./(2*(std.^2))-ln((2*pi)^0.5.*std);
        mm.function = sumc(lnpdf);
    endif;

    aa = (r1dayF-mean)./(std.^2);
    ab = (r1dayF-mean).^2 ./(std.^3)-1 ./std;

    if ind[2];
        aGradient = sumc(aa.*(params[2]-r1dayL)/250);
        bGradient = sumc(aa*params[1]/250);
        sigmaGradient = sumc(ab.*r1dayL.^params[4]/250^.5);
        kGradient = sumc(ab*params[3]/(250^.5).*r1dayL.^params[4].*ln(r1dayL));

        mm.gradient = aGradient~bGradient~sigmaGradient~kGradient;
    endif;

    ba = (params[2]-r1dayL)/250; // mean_Gradient_a
    bb = params[1]/250; // mean_Gradient_b
    bc = r1dayL.^params[4]/(250^.5); // std_Gradient_sigma
    bd = params[3]*r1dayL.^params[4]/(250^.5).*ln(r1dayL); // std_Gradient_k
    ca = 2*(r1dayF-mean)./(std.^3);
    cb = -3*(r1dayF-mean).^2 ./(std.^4) + 1 ./(std.^2);

    let mhess[4,4];

    if ind[3];
        mhess[1,1] = sumc(-1 ./std.^2 .*(ba).^2);
        mhess[2,1] = sumc(-1 ./std.^2 .*ba.*bb + aa/250);
        mhess[3,1] = sumc(-ca.*ba.*bc);
        mhess[4,1] = sumc(-ca.*ba.*bd);
        mhess[2,2] = sumc(-1 ./std.^2 .*bb.^2);
        mhess[3,2] = sumc(-ca.*bb.*bc);
        mhess[4,2] = sumc(-ca.*bb.*bd);
        mhess[3,3] = sumc(cb.*bc.^2);
        mhess[4,3] = sumc(cb.*bc.*bd + ab.*r1dayL.^params[4]/(250^.5).*ln(r1dayL));
        mhess[4,4] = sumc(cb.*bd.^2 + ab.*bd.*ln(r1dayL));
        mhess[1,2] = mhess[2,1];
        mhess[1,3] = mhess[3,1]; mhess[2,3] = mhess[3,2];
        mhess[1,4] = mhess[4,1]; mhess[2,4] = mhess[4,2]; mhess[3,4] = mhess[4,3];

        mm.hessian = mhess;
    endif;
    retp(mm);
endp;
When loading the data, I divide the numbers by 100 to change the scale of the interest rate observations from e.g. 2.34 to decimals like 0.0234.
When I try to modify the code to estimate a regime-switching CKLS model, I don't know how to proceed even though I know the log-likelihood function; a rough sketch of the structure I am aiming for is appended at the end of this post. Could you please give me some advice?
Thank you very much.
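
The appended sketch (not working code; the proc name, the pack name "pTrans", the constant transition probabilities p11/p22, the ergodic starting probability, and relying on numerical derivatives are all just placeholder choices of mine):

proc rscklslnl(struct PV p, struct DS d, ind);
    local pR1, pR2, pTrans, r1dayF, r1dayL, n, mean1, std1, mean2, std2,
          dens1, dens2, p11, p22, xi, f1, f2, ft, lnL;

    struct modelResults mm;

    pR1 = pvUnpack(p,"paramsR1");      // (a, b, sigma, k) in regime 1
    pR2 = pvUnpack(p,"paramsR2");      // (a, b, sigma, k) in regime 2
    pTrans = pvUnpack(p,"pTrans");     // (p11, p22), to be bounded inside (0,1) via c0.Bounds

    r1dayF = d[1].datamatrix;
    r1dayL = d[2].datamatrix;
    n = rows(r1dayF);

    // CKLS conditional mean and std in each regime (same Euler step 1/250 as before)
    mean1 = r1dayL + pR1[1]*(pR1[2]-r1dayL)/250;
    std1 = pR1[3]*r1dayL.^pR1[4]/250^.5;
    mean2 = r1dayL + pR2[1]*(pR2[2]-r1dayL)/250;
    std2 = pR2[3]*r1dayL.^pR2[4]/250^.5;

    // Gaussian transition densities under each regime
    dens1 = pdfn((r1dayF-mean1)./std1)./std1;
    dens2 = pdfn((r1dayF-mean2)./std2)./std2;

    p11 = pTrans[1];
    p22 = pTrans[2];

    // Hamilton filter: xi = P(regime 1 at t | data up to t-1), started at the ergodic probability
    xi = (1-p22)/(2-p11-p22);
    lnL = 0;
    for t (1, n, 1);
        f1 = xi*dens1[t];
        f2 = (1-xi)*dens2[t];
        ft = f1 + f2;                        // mixture density of observation t
        lnL = lnL + ln(ft);
        xi = p11*f1/ft + (1-p22)*f2/ft;      // predicted regime-1 probability for t+1
    endfor;

    if ind[1];
        mm.function = lnL;   // no analytic gradient/Hessian here; cmlmt would difference numerically
    endif;
    retp(mm);
endp;

p11 and p22 would be packed like the other blocks (e.g. p0 = pvPackm(p0, .9|.9, "pTrans", 1|1);) and bounded strictly inside (0,1) in c0.Bounds.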



0



@RonSchoenberg, I am sorry to disturb you, but do you see the error in my program?

Best Regards!



0



"Params[2,1], the parameter "b" of the above CKLS model, should be some kind of a long-run mean interest rate, thus with the magnitude of 0.03, so its really unreasonable on a "43.3941". Params[3,1], "sigma", couldnt be such a large number thought its constrained."

It may be unreasonable, but you are constraining it to the interval [-50,50].  If you feel that it should be on a much smaller interval you will need to set that constraint.  This is what you have:

c0.Bounds = { -50 50,
              -50 50,
             0.0001 50,
               0 5 };

If it's some kind of mean interest rate then it should be constrained to be positive, and if 0.03 is reasonable then perhaps you should constrain it to the unit interval:

c0.Bounds = { -50 50,
             0.0001 1,
             0.0001 50,
               0 5 };

 

Also, you should put the calculation of aa and ab inside an if statement so that it is not executed when only the function value is required:

 

if ind[2] or ind[3];

   aa = (r1dayF-mean)./(std.^2);
   ab = (r1dayF-mean).^2 ./(std.^3)-1 ./std;

endif;

 

That parameter 3 is on a very large constraint boundary and that the parameter 2 estimate is very large suggests two possibilities.   First, that the function is not being properly computed.  But the fact that the gradient and Hessian calculations test out argues against that; it is unlikely you made the same error twice in the same way.  However, it is still possible that you have an error in your function and it got carried into the gradient and Hessian calculations.

Second, your data may not fit your model.  You may be trying to pound a hexagonal peg into a pentagonal hole.   When you constrain parameter 2 to the unit interval you will find that the model fits very poorly.  This is evident from the fact that the unconstrained estimate of parameter 2 is so far away from a reasonable estimate.   The constrained estimate will have a very large value for its derivative, indicating a poor fit.

Given that you are correctly computing the log-likelihood, you may need to modify your model with new parameters to better reflect the processes generating your data.




0



@RonSchoenberg.

The CIR model is a special case of the CKLS model with params[4] fixed at the constant .5, so I tried the following test:

I fixed the last parameter at the constant .5 and left the rest of the program unchanged. As a result, I got the same good estimates that I had with the CIR specification, so I think the program for the CKLS specification is correct. I also found something of interest.
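
In case anyone wants to repeat the test: a 0 in the pvPackm mask holds that element at its start value, so only the packing line needs to change, for example:

    // CIR check: hold k fixed at .5 and estimate only a, b and sigma
    p0 = pvCreate;
    p0 = pvPackm(p0,1|.0242|1|.5,"params",1|1|1|0);

(I believe c0.Bounds then needs one row per free parameter, i.e. three rows here.)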

I then fixed params[4] at constants ranging from 0.1 to 2.5 (.1, .3, .5 (CIR), .7, .9, 1, 1.5, 1.9, 2, 2.5). The resulting maximum log-likelihood increases from .1 up to 1.9 and decreases thereafter, while the fit deteriorates from about 1 onwards, giving large std. err. and Prob. values and wider confidence limits.

Given those findings, should I just drop the CKLS model because of its bad fit, or might there still be a mistake in the cmlmt solution?

Also, when extending the model to a regime-switching model, is there anything that needs special attention? I have done the regime-switching extension with each regime being a Vasicek or CIR process; the results look very good, but the Probs of some estimates are too large, at around .4 or more. Do you have any idea what error might lead to such high Probs?

Great Thanks!

Sincerely!



0



Just now, I changed the scale of my data from .0234 back to 2.34 and ran the same program. The estimation result really surprised me:


 

Parameters     Estimates    Std. err.   Est./s.e.     Prob.    Gradient
---------------------------------------------------------------------
params[1,1]      -3.1646       1.0050      -3.149    0.0016      0.0000
params[2,1]       0.6889       0.1975       3.488    0.0005      0.0000
params[3,1]       0.6347       0.0230      27.566    0.0000      0.0000
params[4,1]       1.9912       0.0419      47.504    0.0000      0.0000


The change is dramatic: the estimated model now fits the data very well.

I don't know why this change occurs.

 



0



Computer numbers behave differently from real numbers.   On most computers there are about 16 decimal digits of accuracy, and this accuracy is achieved by storing the number as a mantissa (the 16 or so significant digits) and an exponent:  a.aaaaaaaaaaaaaaa x 10^eee.   Addition/subtraction is the most destructive operation on a computer, because in order to perform it the exponents have to be made the same.  If one number is very small and the other very large, places of accuracy are dumped off the end.  It is thus very important to construct your calculations to avoid this.
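
A tiny illustration of the effect (made-up numbers, nothing to do with your model):

    // with about 16 significant digits, a small term simply falls off the end
    // when it is added to a much larger one
    x = 1e16;
    print (x + 1) - x;        // prints 0, not 1
    print (1 + 1e-16) - 1;    // prints 0 for the same reason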

You had mentioned earlier that you had scaled your data, which I took to be a very good thing to do, but I haven't seen your data to check whether that was going to be enough.  I'm glad to see that your further scaling is producing the desired result.   This experience should reinforce your initial impulse to pay close attention to scaling the data.  In my own experience with Garch models I find scaling to be critical.



0



@RonSchoenberg.

Sorry, I don't know how to attach files here, so I'm pasting part of my data instead. The data in the following list are raw; I divide them by 100 as I stated before in my original programs. With the scaled data I get good fits for the Vasicek and CIR models but a very poor fit for CKLS, unless I leave the data unscaled.

I thought I would get a good fit with the CKLS model, but failed. I still don't know where the error is.

2.1184
2.0990
2.0922
2.0955
2.0943
2.0889
2.0801
2.0746
2.0723
2.0667
2.0665
2.0573
2.0569
2.0592
2.1170
2.1568
2.1600
2.1885
2.2023
2.2057
2.2055
2.2312
2.3023
2.4264
2.5177
2.6198
2.8071
3.6074
3.5355
3.2292
3.1918
3.2645
3.4289
3.5158
3.5399
3.5101
3.2050
2.7744
2.5547
2.4161
2.3370
2.1803
2.1095
2.0640
2.0063
1.9919
1.9914
1.9181
1.8931
1.8589
1.7777
1.7247
1.6566
1.6130
1.6015
1.6311
1.6123
1.6431
1.7185
1.7107
1.7203
1.5651
1.5675
1.4310
1.4047
1.3721
1.3546
1.3519
1.3344
1.3051
1.2997
1.2809
1.3171
1.4236
1.4126
1.4197
1.5972
1.7079
1.7541
1.7889
1.7830
1.7838
1.7780
1.7602
1.7549
1.7593
1.7848
1.8392
1.8983
2.7851
4.0016
3.3685
4.0174
3.0934
1.9940
1.5143
1.8147
1.7920
1.6928
1.6526
1.5903
1.5194
1.4999
1.4683
1.4375
1.4153
1.3952
1.3800
1.3643
1.3535
1.3513
1.4088
1.5313
1.5551
1.6077
1.6487
1.6871
1.7466
1.7397
1.8016
1.8039
1.8057
1.8019
1.7945
1.7932
1.8913
1.9959
2.0004
2.0110
2.0310
2.0319
2.1153
2.4598
2.4650
2.4762
3.0403
3.3562
3.3857
3.5010
3.5071
3.0190
2.5129
1.9807
1.6710
1.6303
1.6507
1.6777
1.7021
1.9034
2.0008
1.9204
1.9246
1.9076
1.8502
1.8403
1.8086
1.8002
1.7905
1.8090
1.7965
1.7925
1.7967
1.7986
2.1005
2.0487
2.0477
2.0179
1.9837
1.9270
1.8869
1.8618
1.8285
1.8359
1.9088
2.5378
2.6040
2.6731
2.3304
2.1041
1.9006
1.7917
1.7241
1.6741
1.7784
1.7867
2.3693
2.3407
2.3226
2.3088
2.2445
2.6546
2.5790
2.5502
2.5338
2.4398
2.3874
2.1915
1.9627
1.8821
1.8488
1.8565
1.8441
1.8207
1.8510
1.8368
2.0376
2.0267
1.9919
1.9773
1.9671
1.9455
1.9807
1.9461
1.8763
1.8548
1.9909
1.9163
1.8084
1.7703
1.7381
1.7200
2.1618
2.1632
1.8430
1.7498
1.8873
1.9409
1.8820
1.8626
1.8197
1.8860
1.8122
1.8198
2.0446
2.2581
2.8184
2.8864
3.0194
4.0206
3.2914
3.1432
3.2635
3.7760
4.2018
3.8090
3.7234
3.9055
2.6627
2.1150
2.0026
1.9925
1.9981
1.9948
1.8974
1.9194
1.9120
1.8986
1.9072
1.8968
1.8977
1.9301
1.9517
2.2086
3.5629
5.0844
8.5282
2.9014
2.8379
2.8928
2.9161
2.3098
2.2977
2.2918
1.8649
1.9460
1.8902
1.8865
1.8839
1.8568
2.0909
1.9107
1.8938
1.8926
1.8948
1.9018
1.9025
1.8833
1.8921
1.8946
1.8809
1.8635
1.8468
1.8517
1.8720
1.8760
1.8973
1.9118
1.9135
1.9087
1.9052
1.9039
1.9030
1.8812
1.8955
1.8914
1.8928
1.8800
1.9094
1.9167
1.8421
1.8181


0



And, two more questions:

  1. I fit a GARCH model to the above data, but GAUSS reported "Matrix Singular". The error might be that the variance recursion comes up with a negative value; I set the variance to 0.0001 whenever a negative value occurs, but obviously this doesn't work.

    // GARCH Model: r(t) = a + b*r(t-1) + sigma(t)*w(t), where w(t) are i.i.d normal with (0,dt)
    // sigma(t+1)^2 = alpha0 + alpha1*{r(t)-[a + b*r(t-1)]}^2 + beta*sigma(t)^2
    // r(t)|r(t-1) is normal with ( a+b*r(t-1),sigma(t)^2 )
    library cmlmt;

    // contains the structure definitions
    #include cmlmt.sdf

    // load data
    r1dayL = xlsReadM("SHIBOR.xlsx","b2:b2059",1,0)/100;
    r1dayF = xlsReadM("SHIBOR.xlsx","b3:b2060",1,0)/100;

    struct DS d0;
    // create a default data structure
    d0 = reshape(dsCreate,2,1);
    d0[1].datamatrix = r1dayF;
    d0[2].datamatrix = r1dayL;

    struct PV p0;
    // creates a default parameter structure
    p0 = pvCreate;
    p0 = pvPackm(p0,0|1|0|.5|.5|1,"params",1|1|1|1|1|1); // garch parameters (a,b,alpha0,alpha1,beta,sigma1)

    struct cmlmtControl c0;
    // creates a default structure
    c0 = cmlmtControlCreate;
    c0.Bounds = { -50 50,
    -50 50,
    -50 50,
    -50 50,
    -50 50,
    0.0001 50 };
    c0.NumObs = 2057; // the first observation is assumed to be known
    c0.MaxIters = 10000;
    c0.gradCheck = 1;

    struct cmlmtResults out;

    out = cmlmt(&garchlnl,p0,d0,c0);

    // prints the results
    call cmlmtprt(out);
    print out.Hessian;

    proc garchlnl(struct PV p0, struct DS d0, ind);
    local params,r1dayF,r1dayL,n,m,mean,sigmaVector,dr,lnpdf;

    struct modelResults mm;
    params = pvUnpack(p0,"params");
    /* a = params[1];
    b = params[2];
    alpha0 = params[3];
    alpha1 = params[4];
    beta = params[5];
    sigma1 = params[6]; */

    r1dayF = d0[1].datamatrix;
    r1dayL = d0[2].datamatrix;
    n = rows(r1dayL);

    m = params[1] + params[2]*r1dayL;
    mean = m[2:n,1]; // the first observation is excluded when calculating the loglikelihood function
    // for the assumption that r1 is known

    sigmaVector = zeros(n-1,1); // the path of the sigma_squared
    dr = r1dayF - m;

    for i (1,n-2,1);
    sigmaVector[1] = params[6];
    sigmaVector[i+1] = params[3] + params[4]*dr[i]^2 + params[5]*sigmaVector[i];
    if sigmaVector[i+1] lt 0;
    sigmaVector[i+1] = .00001;
    endif;
    endfor;

    if ind[1];
    lnpdf = -.5*ln(2*pi*sigmaVector)-dr[2:n,1].^2 ./(2*sigmaVector);
    mm.function = sumc(lnpdf);
    endif;

    retp(mm);
    endp;


     

  2. When fitting a regime-switching model, such as regime-switching Vasicek or regime-switching CIR, I get results that look very good at first glance, but the Probs of some estimates are too large. Could you please tell me what the reason might be and how to analyze the result?

    Covariance of the parameters computed by the following method:
    ML covariance matrix
    Parameters        Estimates    Std. err.   Est./s.e.     Prob.    Gradient
    ---------------------------------------------------------------------
    paramsR1[1,1]        0.2406       0.1946       1.237    0.2162      0.0152
    paramsR1[2,1]        0.0334       0.0194       1.725    0.0845      0.0949
    paramsR1[3,1]        0.0048       0.0002      19.035    0.0000      0.0365
    paramsR1[4,1]        0.1343       0.0115      11.673    0.0000      0.0336
    paramsR2[1,1]       33.0410       4.8601       6.798    0.0000      0.0001
    paramsR2[2,1]        0.0308       0.0018      16.746    0.0000     -0.0079
    paramsR2[3,1]        0.1019       0.0031      33.217    0.0000      0.0431
    paramsR2[4,1]        0.2530       0.0206      12.258    0.0000      0.0039


Thanks, and your kindness is highly appreciated!

Sincerely!

Jeremy Lee

 



0



1)  Enforcing constraints using an if statement doesn't work because it isn't differentiable.  To constrain the variances to be greater than zero you need to use the nonlinear constraint feature of CMLMT.  Write a procedure that computes the variances and returns them.  You will need to add a small value, as you've already realized, because the constraints in CMLMT are >= and you don't want them to be exactly zero.

Set the member of the cmlmtControl structure, ineqProc, to a pointer to your nonlinear constraint procedure:

c0.ineqProc = &VarConst;

 

And write the procedure:

 

proc VarConst(struct PV p, struct DS d);
    local params,r1dayF,r1dayL,n,m,mean,sigmaVector,dr,lnpdf;

    params = pvUnpack(p,"params");
    /* a = params[1];
    b = params[2];
    alpha0 = params[3];
    alpha1 = params[4];
    beta = params[5];
    sigma1 = params[6]; */

    r1dayF = d[1].datamatrix;
    r1dayL = d[2].datamatrix;
    n = rows(r1dayL);

    m = params[1] + params[2]*r1dayL;
    mean = m[2:n,1]; // the first observation is excluded when calculating the loglikelihood function
    // for the assumption that r1 is known

    sigmaVector = zeros(n-1,1); // the path of the sigma_squared
    dr = r1dayF - m;

    for i (1,n-2,1);
        sigmaVector[1] = params[6];
        sigmaVector[i+1] = params[3] + params[4]*dr[i]^2 + params[5]*sigmaVector[i];
    endfor;

    retp(sigmaVector + .0001);

endp;



0



For those parameters where the probability is greater than some small value that you select (.05, usually), you are not able to reject the null hypothesis that the population parameter is zero.
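
For example, the Prob. of 0.2162 on paramsR1[1,1] is just the two-sided tail probability of its Est./s.e. under a standard normal, which you can check directly:

    // two-sided asymptotic p-value for paramsR1[1,1]
    z = 0.2406/0.1946;        // Est./s.e. = 1.237
    print 2*cdfnc(abs(z));    // about 0.216, matching the printed Prob.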

If those are not critical theoretical parameters, you should leave them in even though they are not statistically significant because they help out the estimation of the other parameters.

If they are theoretically important, then you might want to draw the theoretical conclusion that they might be zero in the population.   I say "might" because your estimate may have low power due to insufficient data for estimating that parameter; with better data you might find that it is statistically significant.



0



Sad to say, my estimation is still not good.
