<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>unit root &#8211; Aptech</title>
	<atom:link href="https://www.aptech.com/blog/tag/unit-root/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aptech.com</link>
	<description>GAUSS Software - Fastest Platform for Data Analytics</description>
	<lastBuildDate>Mon, 10 Mar 2025 17:11:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>How to Run the Fourier LM Test (Video)</title>
		<link>https://www.aptech.com/blog/how-to-run-the-fourier-lm-test-video/</link>
					<comments>https://www.aptech.com/blog/how-to-run-the-fourier-lm-test-video/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 24 Sep 2021 15:05:04 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[structural breaks]]></category>
		<category><![CDATA[unit root]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11581728</guid>

					<description><![CDATA[Learn everything you need to know to run the Fourier LM unit root test with your data and interpret the results.]]></description>
										<content:encoded><![CDATA[<iframe width="560" height="315" src="https://www.youtube.com/embed/VNP7TqC5Goc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>Learn everything you need to know to run the Fourier LM unit root test with your data and interpret the results.</p>
<h3 id="video-chapters">Video chapters:</h3>
<ul>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=0s">0:00 Introduction</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=17s">0:17 Why use the Fourier LM test?</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=75s">1:15 Run the Fourier LM example</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=133s">2:13 Explanation of test inputs</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=233s">3:53 Example file results</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=242s">4:02 Run Fourier LM test on new data</a></li>
</ul>
<h3 id="additional-resources">Additional Resources</h3>
<ul>
<li><a href="https://www.aptech.com/why-gauss-for-unit-root-testing/#ur_test_guide">Unit Root Test Selection Guide</a></li>
<li><a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/">Guide to Conducting Cointegration Tests</a> </li>
<li><a href="https://www.aptech.com/blog/how-to-interpret-cointegration-test-results/">How to Interpret Cointegration Test Results</a></li>
<li><a href="https://docs.aptech.com/gauss/data-management.html">GAUSS Data Management Guide</a></li>
<li><a href="https://www.aptech.com/blog/the-current-working-directory-what-you-need-to-know/">GAUSS Working Directory</a></li>
</ul>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/how-to-run-the-fourier-lm-test-video/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Panel Data Stationarity Test With Structural Breaks</title>
		<link>https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/</link>
					<comments>https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 02 Oct 2020 05:24:31 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Panel data]]></category>
		<category><![CDATA[structural breaks]]></category>
		<category><![CDATA[unit root]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=21878</guid>

					<description><![CDATA[Reliable unit root testing is an important step of any time series analysis or panel data analysis. 

However, standard time series unit root tests and panel data unit root tests aren’t reliable when structural breaks are present. Because of this, when structural breaks are suspected, we must employ unit root tests that properly incorporate these breaks. 

Today we will examine one of those tests, the Carrion-i-Silvestre, et al. (2005) panel data test for stationarity in the presence of multiple structural breaks.]]></description>
										<content:encoded><![CDATA[<p>    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>
<h3 id="introduction">Introduction</h3>
<p>The validity of many <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">time series models</a> and <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">panel data models</a> requires that the underlying data is stationary. As such, reliable <a href="https://www.aptech.com/why-gauss-for-unit-root-testing/" target="_blank" rel="noopener">unit root testing</a> is an important step of any time series analysis or panel data analysis. </p>
<p>However, standard time series unit root tests and panel data unit root tests aren’t reliable when <a href="https://www.aptech.com/structural-breaks/" target="_blank" rel="noopener">structural breaks</a> are present. Because of this, when structural breaks are suspected, we must employ unit root tests that properly incorporate these breaks. </p>
<p>Today we will examine one of those tests, the Carrion-i-Silvestre, et al. (2005) panel data test for stationarity in the presence of multiple structural breaks.</p>
<h2 id="why-panel-data-unit-root-testing">Why Panel Data Unit Root Testing?</h2>
<p>We may be tempted when working with panel data to treat the data as individual time-series, performing unit root testing on each one separately. However, one of the fundamental ideas of panel data is that there is a shared underlying component that connects the group. </p>
<p>It is this shared component that suggests there are advantages to be gained from testing the panel data collectively:</p>
<ul>
<li>Panel data contains more combined information and variation than pure time-series data or cross-sectional data.   </li>
<li>Collectively testing for unit roots in panels provides more power than testing individual series.  </li>
<li>Panel data unit root tests are more likely than time series unit root tests to have standard asymptotic distributions. </li>
</ul>
<p>Put simply, when dealing with panel data, using tests designed specifically for panel data and testing the panel collectively can lead to more reliable results.</p>
<div class="alert alert-info" role="alert">For more background on unit root testing, see our previous blog post, <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/" target="_blank" rel="noopener">“How to Conduct Unit Root Tests in GAUSS”</a>.</div>
<h2 id="why-do-we-need-to-worry-about-structural-breaks">Why do we Need to Worry About Structural Breaks?</h2>
<p>It is important to properly address structural breaks when conducting unit root testing because most <strong>standard unit root tests are biased towards non-rejection</strong> of the unit root null hypothesis. We discuss this in greater detail in our <a href="https://www.aptech.com/blog/unit-root-tests-with-structural-breaks/" target="_blank" rel="noopener">&#8220;Unit Root Tests with Structural Breaks&#8221;</a> blog.</p>
<h2 id="panel-data-stationarity-test-with-structural-breaks">Panel Data Stationarity Test with Structural Breaks</h2>
<p>The Carrion-i-Silvestre, <em>et al.</em> (2005) panel data stationarity test introduces a number of important testing features:</p>
<ul>
<li>Tests the null hypothesis of stationarity against the alternative of non-stationarity.  </li>
<li>Allows for multiple, unknown structural breaks.  </li>
<li>Accommodates shifts in the mean and/or trend of the individual time series.   </li>
<li>Does not require the same breaks across the entire panel but, rather, allows for each individual to have a different number of breaks at different dates.   </li>
<li>Allows for homogeneous or heterogeneous long-run variances across individuals.  </li>
</ul>
<div style="text-align:center;background-color:#37444d;padding-top:40px;padding-bottom:40px;"><span style="color:#FFFFFF">Deciding which unit root test is right for your data?</span> <a href="https://www.aptech.com/why-gauss-for-unit-root-testing/#ur_test_guide">Download our Unit Root Selection Guide!</a></div>
<h2 id="conducting-panel-data-stationarity-tests-in-gauss">Conducting Panel Data Stationarity Tests in GAUSS</h2>
<h3 id="where-can-i-find-the-tests">Where can I Find the Tests?</h3>
<p>The panel data stationarity test with structural breaks is implemented by the <a href="https://docs.aptech.com/gauss/tspdlib/docs/pd_kpss.html" target="_blank" rel="noopener"><code>pd_kpss</code></a> procedure in the GAUSS <a href="https://docs.aptech.com/gauss/tspdlib/docs/tspdlib-landing.html" target="_blank" rel="noopener">tspdlib</a> library. </p>
<p>The library can be directly installed using the <a href="https://www.aptech.com/blog/gauss-package-manager-basics/" target="_blank" rel="noopener">GAUSS Package Manager</a>. </p>
<h3 id="what-format-should-my-data-be-in">What Format Should my Data be in?</h3>
<p>The <code>pd_kpss</code> procedure takes panel data in wide format: each column of your data matrix should contain the time series observations for a different individual in the panel. </p>
<p>For example, if we have 100 observations of real GDP for 3 countries, our test data will be a 100 x 3 matrix.</p>
<table>
<thead>
<tr>
<th>Observation #</th>
<th>Country A</th>
<th>Country B</th>
<th>Country C</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.11</td>
<td>1.40</td>
<td>1.39</td>
</tr>
<tr>
<td>2</td>
<td>1.14</td>
<td>1.37</td>
<td>1.34</td>
</tr>
<tr>
<td>3</td>
<td>1.27</td>
<td>1.45</td>
<td>1.28</td>
</tr>
<tr>
<td>4</td>
<td>1.19</td>
<td>1.51</td>
<td>1.35</td>
</tr>
<tr>
<td>$\vdots$</td>
<td>$\vdots$</td>
<td>$\vdots$</td>
<td>$\vdots$</td>
</tr>
<tr>
<td>99</td>
<td>1.53</td>
<td>1.75</td>
<td>1.65</td>
</tr>
<tr>
<td>100</td>
<td>1.68</td>
<td>1.78</td>
<td>1.67</td>
</tr>
</tbody>
</table>
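<p>Because the test expects this wide layout, a quick dimension check helps confirm the panel is oriented correctly. A minimal sketch, assuming the test data is stored in a matrix <code>y</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// y should be T x N: time observations in rows,
// one column per individual in the panel
print "Time observations (T): " rows(y);
print "Individuals (N):       " cols(y);</code></pre>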
<h3 id="how-do-i-call-the-test-procedure">How do I Call the Test Procedure?</h3>
<p>The first step to implementing the panel data stationarity test with structural breaks in GAUSS is to load the <code>tspdlib</code> library. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tspdlib;</code></pre>
<p>This statement provides access to all the procedures in the <code>tspdlib</code> library. Once the library is loaded, the <code>pd_kpss</code> procedure can be called directly from the command line or within a program file. </p>
<p>The <code>pd_kpss</code> procedure takes 2 required inputs and 5 optional arguments:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">{ test_hom, test_het, kpss_test, brks } = pd_kpss(y, model, 
                                                      nbreaks,
                                                      bwl,
                                                      varm, 
                                                      pmax, 
                                                      b_ctl);</code></pre>
<hr>
<dl>
<dt>y</dt>
<dd>$T \times N$ Wide form panel data to be tested.</dd>
<dt>model</dt>
<dd>Scalar, the model specification:
<table>
<tbody>
<tr><td>1</td><td>Constant (Hadri test)</td></tr>
<tr><td>2</td><td>Constant + trend (Hadri test)</td></tr>
<tr><td>3</td><td>Constant + shift (in mean)</td></tr>
<tr><td>4</td><td>Constant + trend + shift (in mean and trend)</td></tr>
</tbody>
</table></dd>
<dt>nbreaks</dt>
<dd>Scalar, Optional input, number of breaks to consider (up to 5). Default = 5.</dd>
<dt>bwl</dt>
<dd>Scalar, Optional input, bandwidth for the spectral window. Default = round(4 * (T/100)^(2/9)).</dd>
<dt>varm</dt>
<dd>Scalar, Optional input, kernel used for long-run variance computation. Default = 1:
<table>
<tbody>
<tr><td>1</td><td>iid</td></tr>
<tr><td>2</td><td>Bartlett.</td></tr>
<tr><td>3</td><td>Quadratic spectral (QS).</td></tr>
<tr><td>4</td><td>Sul, Phillips, and Choi (2003) with the Bartlett kernel.</td></tr>
<tr><td>5</td><td>Sul, Phillips, and Choi (2003) with quadratic spectral kernel.</td></tr>
<tr><td>6</td><td>Kurozumi with the Bartlett kernel.</td></tr>
<tr><td>7</td><td>Kurozumi with quadratic spectral kernel.</td></tr>
</tbody>
</table></dd>
<dt>pmax</dt>
<dd>Scalar, Optional input, the maximum number of lags used in the estimation of the AR(p) model for the long-run variance. The final number of lags is chosen using the BIC criterion. Default = 8.</dd>
<dt>b_ctl</dt>
<dd>Optional input, an instance of the <code>breakControl</code> structure controlling the settings for the Bai and Perron structural break estimation.</dd>
</dl>
<hr>
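<p>As an illustration of the optional inputs, the following sketch allows up to three breaks and uses the Bartlett kernel for the long-run variance. The data matrix <code>y</code> is assumed to already be in wide format, and the bandwidth line simply reproduces the default formula from the input list above:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Hypothetical call: up to 3 breaks, Bartlett kernel
nbreaks = 3;

// Default bandwidth formula from the input list above
bwl = round(4 * (rows(y)/100)^(2/9));

// Bartlett kernel for long-run variance
varm = 2;

// Model 4: constant + trend + shift in mean and trend
{ test_hom, test_het, kpss_test, brks } = pd_kpss(y, 4, nbreaks, bwl, varm);</code></pre>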
<p>The <code>pd_kpss</code> procedure provides 4 returns:</p>
<hr>
<dl>
<dt>test_hom</dt>
<dd>Scalar, stationarity test statistic with structural breaks and homogeneous variance.</dd>
<dt>test_het</dt>
<dd>Scalar, stationarity test statistic with structural breaks and heterogeneous variance.</dd>
<dt>kpss_test</dt>
<dd>Matrix, individual test results. The first column contains the test statistics, the second the number of breaks, the third the BIC-chosen optimal lags, and the fourth the LWZ-chosen optimal lags.</dd>
<dt>brks</dt>
<dd>Matrix of estimated breaks. Breaks for each individual group are contained in separate rows.
<hr></dd>
</dl>
<h2 id="empirical-example">Empirical Example</h2>
<p>Let’s look further into testing for panel data stationarity with structural breaks using an empirical example.</p>
<h3 id="data-description">Data Description</h3>
<p>The dataset contains government deficit as a percentage of GDP for nine OECD countries. The time span ranges from 1995 to 2019. This gives us a balanced panel of 9 individuals and 25 time observations each. </p>
<h3 id="loading-our-data-into-gauss">Loading our data into GAUSS</h3>
<p>Our first step is to load the data from <code>govt-deficit-oecd.csv</code> using <a href="https://docs.aptech.com/gauss/loadd.html" target="_blank" rel="noopener"><code>loadd</code></a>. This <code>.csv</code> file contains three variables, <code>Country</code>, <code>Year</code>, and <code>Gov_deficit</code>. </p>
<p>We will load all three variables into a <a href="https://www.aptech.com/blog/what-is-a-gauss-dataframe-and-why-should-you-care/" target="_blank" rel="noopener">GAUSS dataframe</a>. Note that <code>loadd</code> automatically detects that <code>Country</code> is a categorical variable, and assigns the <code>category</code> type. However, we will need to convert <code>Year</code> to a <a href="https://www.aptech.com/blog/dates-and-times-made-easy/" target="_blank" rel="noopener">date variable</a>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load all variables and convert country to numeric categories
data = loadd("govt-deficit-oecd.csv");

// Convert "Year" to a date variable
data = asDate(data, "%Y", "Year");</code></pre>
<p>This loads our data in long format (a 225 x 3 dataframe). Our next step is to convert it to wide format using the <a href="https://docs.aptech.com/gauss/dfwider.html" target="_blank" rel="noopener"><code>dfWider</code></a> procedure. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify names_from column 
names_from = "Country";

// Specify values_from column
values_from = "Gov_deficit";

// Convert from long to wide format
wide_data = dfWider(data, names_from, values_from);</code></pre>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Delete first column which contains the year variable
govt_def = delcols(wide_data, 1);</code></pre>
<h3 id="setting-up-our-model-parameters">Setting up our Model Parameters</h3>
<p>With our loading and transformations complete, we are ready to set up our testing parameters. For this test, we will allow for a constant and trend, with shifts in both the mean and trend.
All other parameters will be kept at their default values. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify the model:
// constant + trend + shift in mean and trend
model = 4;</code></pre>
<h3 id="calling-the-pd_kpss-procedure">Calling the <code>pd_kpss</code> Procedure</h3>
<p>Finally, we call the <code>pd_kpss</code> procedure:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">{ test_hom, test_het, kpss_test, brks } = pd_kpss(govt_def, model);</code></pre>
<h2 id="empirical-results">Empirical Results</h2>
<p>The <code>pd_kpss</code> output includes:</p>
<ul>
<li>A header describing the testing settings. </li>
<li>The <code>test_hom</code> and <code>test_het</code> test statistics along with associated p-values.</li>
<li>The critical values for both test statistics.  </li>
<li>The testing conclusions based on a comparison of the test statistics to the associated critical values. </li>
</ul>
<pre>Test:                                                PD KPSS
Ho:                                             Stationarity
Number of breaks:                                       None
LR variance:                                             iid
Model:                                Break in level &amp; trend
==============================================================
                                      PD KPSS          P-val

Homogenous                             14.352          0.000
Heterogenous                           10.425          0.000

Critical Values:
                            1%             5%            10%

Homogenous               2.326          1.645          1.282
Heterogenous             2.326          1.645          1.282
==============================================================

Homogenous var:
Reject the null hypothesis of stationarity at the 1% level.

Heterogenous var:
Reject the null hypothesis of stationarity at the 1% level.</pre>
<p>These results tell us that we can reject the null hypothesis of stationarity at the 1% level for both cases, homogeneous and heterogeneous variance.</p>
<p>The test results also include a table of individual test results and conclusions:</p>
<pre>==============================================================
Individual panel results
==============================================================
                                         KPSS    Num. Breaks

AUT                                     0.165          2.000
DEU                                     0.079          0.000
ESP                                     0.249          4.000
FRA                                     0.210          2.000
GBR                                     0.298          2.000
IRL                                     0.235          2.000
ITA                                     0.130          3.000
LUX                                     0.127          3.000
NOR                                     0.414          1.000

Critical Values:
                            1%             5%            10%

AUT                      0.059          0.048          0.043
DEU                      0.207          0.150          0.122
ESP                      0.035          0.031          0.028
FRA                      0.056          0.045          0.040
GBR                      0.058          0.046          0.041
IRL                      0.074          0.059          0.051
ITA                      0.055          0.045          0.041
LUX                      0.058          0.045          0.039
NOR                      0.083          0.066          0.058
==============================================================

AUT                                     Reject Ho ( 1% level)
DEU                                          Cannot reject Ho
ESP                                     Reject Ho ( 1% level)
FRA                                     Reject Ho ( 1% level)
GBR                                     Reject Ho ( 1% level)
IRL                                     Reject Ho ( 1% level)
ITA                                     Reject Ho ( 1% level)
LUX                                     Reject Ho ( 1% level)
NOR                                     Reject Ho ( 1% level)
==============================================================</pre>
<p>Finally, the <code>pd_kpss</code> procedure prints the estimated breakpoints for each individual in the panel.</p>
<pre>Group        Break 1      Break 2      Break 3      Break 4      Break 5<br />
AUT          2003         2008         .            .            .<br />
DEU          .            .            .            .            .<br />
ESP          1999         2006         2009         2012         .<br />
FRA          2001         2008         .            .            .<br />
GBR          2000         2008         .            .            .<br />
IRL          2007         2010         .            .            .<br />
ITA          1997         2006         2009         .            .<br />
LUX          1999         2004         2008         .            .<br />
NOR          2008         .            .            .            .            </pre>
<div class="alert alert-info" role="alert">For more information on how to view the matrices returned by <code>pd_kpss</code> see our <a href="https://www.aptech.com/resources/tutorials/introduction-to-gauss-viewing-data-in-gauss/" target="_blank" rel="noopener">data viewing tutorial</a>.</div>
<h2 id="interpreting-the-results">Interpreting the Results</h2>
<p>When interpreting the results from the <code>pd_kpss</code> test, it helps to remember a few key things:</p>
<ul>
<li>The test considers the null hypothesis of <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/#what-is-a-stationary-time-series" target="_blank" rel="noopener">stationarity</a> against the alternative of non-stationarity.</li>
<li>We reject the null hypothesis of stationarity when we observe:
<ul>
<li>Large values of the test statistic. </li>
<li>Small p-values. </li>
</ul></li>
</ul>
<p>Notice that the <code>tspdlib</code> library conveniently provides interpretations for the <code>pd_kpss</code> tests. </p>
<h3 id="panel-data-test-statistic">Panel Data Test Statistic</h3>
<p>The test statistic for our panel, assuming homogeneous variances:</p>
<ul>
<li>Is equal to 14.352 with a p-value of 0.000.</li>
<li>Suggests that we reject the null hypothesis of stationarity at the 1% level. </li>
</ul>
<p>The test statistic for our panel, assuming heterogeneous variances:</p>
<ul>
<li>Is equal to 10.425 with a p-value of 0.000. </li>
<li>Suggests that we reject the null hypothesis of stationarity at the 1% level.</li>
</ul>
<p>These results tell us that regardless of whether we assume heterogeneous or homogeneous variances, we can reject the null hypothesis of stationarity for the panel. Given this, we must make proper adjustments to account for non-stationarity when modeling our data. </p>
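<p>The printed conclusions follow a one-sided decision rule: reject stationarity when the panel statistic exceeds the upper-tail critical value. A minimal sketch of that rule, using the 1% critical value reported in the output above:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// 1% critical value from the output above
cv_1pct = 2.326;

// Reject stationarity for large values of the statistic
if test_hom > cv_1pct;
    print "Reject the null hypothesis of stationarity at the 1% level.";
else;
    print "Cannot reject the null hypothesis of stationarity.";
endif;</code></pre>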
<h3 id="individual-test-results">Individual Test Results</h3>
<p><a href="https://www.aptech.com/wp-content/uploads/2020/09/pankpss-graph-spanish-1.jpeg"><img src="https://www.aptech.com/wp-content/uploads/2020/09/pankpss-graph-spanish-1.jpeg" alt="Panel data stationarity test with structural breaks. " width="75%" height="75%" class="aligncenter size-full wp-image-11580083" /></a></p>
<table>
<thead><tr><th>Country</th><th>Statistic</th><th>Breaks</th><th>Conclusion</th></tr></thead>
<tbody>
<tr><td>Austria</td><td>0.165</td><td>2003;2008</td><td>Reject null at 1%.</td></tr> 
<tr><td>France</td><td>0.210</td><td>2001;2008</td><td>Reject null at 1%.</td></tr>
<tr><td>Germany</td><td>0.079</td><td>None</td><td>Cannot reject null.</td></tr>
<tr><td>Ireland</td><td>0.235</td><td>2007;2010</td><td>Reject null at 1%.</td></tr>
<tr><td>Italy</td><td>0.130</td><td>1997;2006;2009</td><td>Reject null at 1%.</td></tr>
<tr><td>Luxembourg</td><td>0.127</td><td>1999;2004;2008</td><td>Reject null at 1%.</td></tr>
<tr><td>Norway</td><td>0.414</td><td>2008</td><td>Reject null at 1%.</td></tr>
<tr><td>Spain</td><td>0.249</td><td>1999;2006;2009;2012</td><td>Reject null at 1%.</td></tr>
<tr><td>United Kingdom</td><td>0.298</td><td>2000;2008</td><td>Reject null at 1%.</td></tr>

</tbody>
</table>
<h2 id="conclusion">Conclusion</h2>
<p>Today's blog considers the panel data stationarity test proposed by Carrion-i-Silvestre, et al. (2005). This test is built upon two crucial aspects of unit root testing:</p>
<ul>
<li>Panel data specific tests should be used with panel data.</li>
<li>Structural breaks should be accounted for.</li>
</ul>
<p>Ignoring these two facts can result in unreliable results. </p>
<p>After today, you should have a stronger understanding of how to implement the panel data stationarity test with structural breaks in GAUSS and how to interpret the results. </p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/panel-data-structural-breaks-and-unit-root-testing/" target="_blank" rel="noopener">Panel data, structural breaks and unit root testing</a></li>
<li><a href="https://www.aptech.com/blog/panel-data-basics-one-way-individual-effects/" target="_blank" rel="noopener">Panel Data Basics: One-way Individual Effects</a></li>
<li><a href="https://www.aptech.com/blog/how-to-aggregate-panel-data-in-gauss/" target="_blank" rel="noopener">How to Aggregate Panel Data in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">Introduction to the Fundamentals of Panel Data</a></li>
<li><a href="https://www.aptech.com/blog/transforming-panel-data-to-long-form-in-gauss/" target="_blank" rel="noopener">Transforming Panel Data to Long Form in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/get-started-with-panel-data-in-gauss-video/" target="_blank" rel="noopener">Getting Started With Panel Data in GAUSS </a></li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>How to Interpret Cointegration Test Results</title>
		<link>https://www.aptech.com/blog/how-to-interpret-cointegration-test-results/</link>
					<comments>https://www.aptech.com/blog/how-to-interpret-cointegration-test-results/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Tue, 26 May 2020 20:36:06 +0000</pubDate>
				<category><![CDATA[Time Series]]></category>
		<category><![CDATA[cointegration]]></category>
		<category><![CDATA[unit root]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=22067</guid>

					<description><![CDATA[In this blog, we will explore how to set up and interpret cointegration results using a real-world time series example. We will cover the case with no structural breaks as well as the case with one unknown structural break using tools from the GAUSS tspdlib library.
]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>In this blog we will explore how to <strong>set up and interpret cointegration results</strong> using a real-world time series example. We will cover the case with <strong>no structural breaks</strong> as well as the case with <strong>one unknown structural break</strong> using tools from the GAUSS <a href="https://github.com/aptech/tspdlib">tspdlib library</a>.</p>
<h2 id="dataset">Dataset</h2>
<p>In this blog, we will use the famous <a href="https://github.com/aptech/tspdlib/blob/master/examples/nelsonplosser.dta">Nelson-Plosser time series data</a>. The dataset contains macroeconomic fundamentals for the United States.</p>
<p>We will be using three of these fundamentals:</p>
<ul>
<li>M2 money stock.</li>
<li>Bond yield (measured by the basic yields of 30-year corporate bonds).</li>
<li>S&amp;P 500 index stock prices.</li>
</ul>
<p>The <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/">time series data</a> is annual, covering 1900 to 1970.</p>
<h2 id="preparing-for-cointegration">Preparing for Cointegration</h2>
<p>In order to prepare for <a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/">cointegration testing</a>, we will take some preliminary time series modeling steps. We will:</p>
<ul>
<li><a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/#establishing-underlying-theory">Establish our underlying theory</a>.</li>
<li><a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/#time-series-visualization">Visualize our time series data</a>.</li>
<li><a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/#unit-root-testing">Perform unit root testing</a>. </li>
</ul>
<h3 id="establishing-an-underlying-theory">Establishing an Underlying Theory</h3>
<p>In this example, we will examine the macroeconomic question of whether stock prices are linked to macroeconomic indicators. In particular, we will examine if there is a cointegrated, long-run relationship between the S&amp;P 500 price index and monetary policy indicators of the M2 money stock and the bond yields. </p>
<p>Mathematically we will consider the cointegrated relationship:</p>
<p>$$y_{sp, t} = c + \beta_1 y_{money, t} + \beta_2y_{bond, t} + u_t$$ </p>
<h3 id="time-series-visualization">Time Series Visualization</h3>
<p>When <a href="https://www.aptech.com/resources/tutorials/tsmt/wpi-visualizing-time-series-data/">visualizing time series data</a>, we look for visual evidence of:</p>
<ul>
<li>The comovements between our variables.</li>
<li>The presence of deterministic components such as constants and time trends.</li>
<li>Potential <a href="https://www.aptech.com/structural-breaks/">structural breaks</a>.</li>
</ul>
<p><a href="https://www.aptech.com/wp-content/uploads/2020/05/time-series-plots-sm.jpg"><img src="https://www.aptech.com/wp-content/uploads/2020/05/time-series-plots-sm.jpg" alt="Time series plot to examine cointegration." width="879" height="371" class="aligncenter size-full wp-image-22071" /></a></p>
<p>Our time series plots give us some important considerations for our testing, providing visual evidence to support:</p>
<ol>
<li>Comovements between the variables.</li>
<li>At least one structural break in the time series dynamics of all three of our variables.</li>
<li>A potential time trend in the datasets, especially in the later years of the sample.</li>
</ol>
<h3 id="unit-root-testing">Unit Root Testing</h3>
<p>Prior to testing for cointegration between our time series data, we should check for <a href="https://www.aptech.com/why-gauss-for-unit-root-testing/">unit roots</a> in the data. We will do this using the <code>adf</code> procedure in the <code>tspdlib</code> library to conduct the <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/#the-augmented-dickey-fuller-test">Augmented Dickey-Fuller unit root test</a>. </p>
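<p>A call along the following lines produces the statistics reported below. This is a sketch only: the <code>adf</code> return signature shown here (test statistic, selected lags, critical values) is assumed from the conventions used by the other <code>tspdlib</code> procedures in this post.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tspdlib;

// Load the three series and drop rows with missing values
coint_data = packr(loadd("nelsonplosser.dta", "sp500 + m + bnd"));

// Constant and trend, maximum of 12 lags, Schwarz criterion
model = 2;
pmax = 12;
ic = 2;

// ADF test on the S&amp;P 500 series
// (assumed returns: test statistic, selected lags, critical values)
{ ADFtau, ADFp, cvADF } = adf(coint_data[., 1], model, pmax, ic);</code></pre>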
<table>
 <thead>
<tr><th>Variable</th><th>Test Statistic</th><th>1% Critical Value</th><th>5% Critical Value</th><th>10% Critical Value</th><th>Conclusion</th></tr></thead>
<tbody>
<tr><td><b>Money</b></td><td>1.621</td><td>-4.04</td><td>-3.45</td><td>-3.15</td><td>Cannot reject the null</td></tr>
<tr><td><b>Bond yield</b></td><td>-1.360</td><td>-4.04</td><td>-3.45</td><td>-3.15</td><td>Cannot reject the null</td></tr>
<tr><td><b>S&amp;P 500</b></td><td>-0.3842</td><td>-4.04</td><td>-3.45</td><td>-3.15</td><td>Cannot reject the null</td></tr>
</tbody>
</table>
<p>Our ADF test statistics are greater than the 10% critical value for all of our time series. This implies that we cannot reject the null hypothesis of a unit root for any of our time series data.  </p>
<div class="alert alert-info" role="alert">For detailed information on conducting unit root tests in GAUSS see our previous blog on <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/">“How to Conduct Unit Root Tests in GAUSS”</a>.</div>
<h3 id="unit-root-testing-with-structural-breaks">Unit Root Testing with Structural Breaks</h3>
<p>What about the potential structural break that we see in our time series data? Does this have an impact on our unit root testing? </p>
<p>Using the <code>adf_1break</code> procedure in the <code>tspdlib</code> library to test for <a href="https://www.aptech.com/blog/unit-root-tests-with-structural-breaks/">unit roots with a single structural break</a> in the trend and constant we get the following results. </p>
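<p>A sketch of the corresponding call is below; the <code>adf_1break</code> argument list and return list shown here are assumptions based on the pattern of the other <code>tspdlib</code> calls in this post, not a verified signature.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tspdlib;

// Load the three series and drop rows with missing values
coint_data = packr(loadd("nelsonplosser.dta", "sp500 + m + bnd"));

// Break in both constant and trend (assumed model code)
model = 2;

// Maximum lags, Schwarz criterion, 10% trimming (assumed inputs)
pmax = 12;
ic = 2;
trimm = 0.10;

// ADF test with one unknown break on the S&amp;P 500 series
// (assumed returns: minimum statistic, break date, lags, critical values)
{ ADFstat, tb, lags, cv } = adf_1break(coint_data[., 1], model, pmax, ic, trimm);</code></pre>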
<table>
 <thead>
<tr><th>Variable</th><th>Test Statistic</th><th>Break Date</th><th>1% Critical Value</th><th>5% Critical Value</th><th>10% Critical Value</th><th>Conclusion</th></tr></thead>
<tbody>
<tr><td><b>Money</b></td><td>-4.844</td><td>1948</td><td>-5.57</td><td>-5.08</td><td>-4.82</td><td>Cannot reject the null</td></tr>
<tr><td><b>Bond yield</b></td><td>-3.226</td><td>1963</td><td>-5.57</td><td>-5.08</td><td>-4.82</td><td>Cannot reject the null</td></tr>
<tr><td><b>S&amp;P 500</b></td><td>-4.639</td><td>1945</td><td>-5.57</td><td>-5.08</td><td>-4.82</td><td>Cannot reject the null</td></tr>
</tbody>
</table>
<p>Our ADF test statistics again suggest that even when accounting for the structural break, we cannot reject the null hypothesis of a unit root for any of our time series data. </p>
<h2 id="conducting-our-cointegration-tests">Conducting our Cointegration Tests</h2>
<p>Having concluded that there is evidence for unit roots in our data, we can now run our cointegration tests. </p>
<p>When setting up cointegration tests, there are a number of assumptions that we must specify:</p>
<ul>
<li>Which normalization we want to use.</li>
<li>The deterministic components to include in our model.</li>
<li>The maximum number of lags to allow in our test.</li>
<li>The information criterion to use to select the optimal number of lags.  </li>
</ul>
<p>To better understand these general assumptions, let’s look at the simplest of our tests, the <a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/#the-engle-granger-cointegration-test">Engle-Granger</a> cointegration test.  </p>
<h3 id="normalization">Normalization</h3>
<p>In the two-stage, residual-based cointegration tests which we will consider today, normalization amounts to deciding which variable is our dependent variable and which variables are our independent variables in the cointegration regression. </p>
<p>We will choose our normalization to reflect our theoretical question of whether the S&amp;P 500 index is cointegrated with the money stock and the bond yield. As we mentioned earlier, this means we will consider the cointegrated relationship:</p>
<p>$$y_{sp, t} = c + \beta_1 y_{money, t} + \beta_2 y_{bond, t} + u_t$$ </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set fname to name of dataset
fname = "nelsonplosser.dta";

// Load three variables from the dataset 
// and remove rows with missing values
coint_data = packr(loadd(fname, "sp500 + m + bnd"));

// Define y and x matrix
y = coint_data[., 1];
x = coint_data[., 2 3];</code></pre>
<h3 id="the-deterministic-component">The Deterministic Component</h3>
<p>The second assumption we must make about our Engle-Granger test is which <code>model</code> we wish to use. To understand how to make this decision, let's look closer at what this input means. </p>
<p>The Engle-Granger test is a two-step test:</p>
<ul>
<li>Estimate the cointegration regression. </li>
<li>Test for stationarity in the residuals using the ADF unit root test.</li>
</ul>
<p>When we specify which model to use we impact two things:</p>
<ol>
<li>The deterministic components which are used in the first-stage cointegration regression.</li>
<li>The distribution of the test statistic. </li>
</ol>
<p>There are three options to choose from:</p>
<ol>
<li>
<p>No constant or trend (<code>model = 0</code>)
$$y_{sp, t} = \beta_1 y_{money, t} + \beta_2 y_{bond, t} + u_t$$ </p>
</li>
<li>
<p>Constant (<code>model = 1</code>)
$$y_{sp, t} = \alpha + \beta_1 y_{money, t} + \beta_2 y_{bond, t} + u_t$$ </p>
</li>
<li>Constant and trend (<code>model = 2</code>)
$$y_{sp, t} = \alpha + \delta t + \beta_1 y_{money, t} + \beta_2 y_{bond, t} + u_t$$ </li>
</ol>
<p>For our example, we will include a constant and trend in our first-stage cointegration regression by setting:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Select model with constant and trend
model = 2;</code></pre>
<h3 id="the-lag-specifications">The Lag Specifications</h3>
<p>In the second-stage ADF residual unit root test, the error terms should be serially independent. To account for possible autocorrelation, lags of the first differences of the residual can be included in the ADF test regression.</p>
<p>The GAUSS <code>coint_egranger</code> procedure will automatically determine the optimal number of lags to include in the second-stage regression based on two user inputs:</p>
<ol>
<li>The maximum number of lags to allow.</li>
<li>The criterion to use to determine the optimal number of lags:
<ul>
<li>The Akaike information criterion (AIC) [<code>ic = 1</code>]</li>
<li>The Schwarz information criterion (SIC) [<code>ic = 2</code>]</li>
<li>The t-stat criterion [<code>ic = 3</code>]</li>
</ul></li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Information Criterion: 
** 1=Akaike; 
** 2=Schwarz; 
** 3=t-stat sign.
*/
ic = 2; 

// Maximum number of lags 
pmax = 12;  </code></pre>
<h3 id="calling-our-cointegration-test">Calling our Cointegration Test</h3>
<p>Now that we have loaded our data and chosen the test settings, we can call the <code>coint_egranger</code> procedure:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Perform Engle-Granger Cointegration Test
{ tau_eg, cvADF_eg } = coint_egranger(y, x, model, pmax, ic);</code></pre>
<h2 id="interpreting-our-cointegration-results">Interpreting Our Cointegration Results</h2>
<p>In order to interpret our cointegration results, let's revisit the two steps of the Engle-Granger test:</p>
<ol>
<li>Estimate the cointegration regression.</li>
<li>Test the residuals from the cointegration regression for unit roots. </li>
</ol>
<p>The Engle-Granger test statistic for cointegration reduces to an ADF unit root test of the residuals of the cointegration regression:</p>
<ul>
<li>If the residuals contain a unit root, then there is no cointegration.</li>
<li>The null hypothesis of the ADF test is that the residuals have a unit root. Therefore, the Engle-Granger test considers the null hypothesis that there is no cointegration.</li>
<li>As the Engle-Granger test statistic decreases:
<ul>
<li>We are more likely to reject the null hypothesis of no cointegration.</li>
<li>We have stronger evidence that the variables are cointegrated.</li>
</ul></li>
</ul>
<p>After running our cointegration test we obtain the following results:</p>
<pre>-----------Engle-Granger Test---------------------------
-----------Constant and Trend---------------------------
H0: no co-integration (EG, 1987 &amp; P0, 1990)

     Test      Statistic   CV(1%,      5%,      10%)
   ------      -------------------------------------
   EG_ADF         -2.105   -4.645   -4.157   -3.843
</pre>
<p>We can see that:</p>
<ul>
<li>Our test statistic of -2.105 is larger than the critical values at the 1%, 5%, and 10% levels. </li>
<li>We cannot reject the null hypothesis of no cointegration. </li>
<li>We do not find evidence in support of the cointegration of the S&amp;P 500 with the U.S. money stock and bond yield. </li>
</ul>
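<p>This decision rule is easy to automate. The sketch below assumes that <code>cvADF_eg</code> is a vector holding the 1%, 5%, and 10% critical values in that order:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Compare the EG-ADF statistic with the 5% critical value
// (assumes cvADF_eg = { 1% cv, 5% cv, 10% cv })
if tau_eg &lt; cvADF_eg[2];
    print "Reject the null of no cointegration at the 5% level.";
else;
    print "Cannot reject the null of no cointegration.";
endif;</code></pre>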
<h2 id="conducting-our-cointegration-tests-with-one-structural-break">Conducting our Cointegration Tests with One Structural Break</h2>
<p>Earlier we saw that the potential structural break in our data did not change our unit root test conclusion. We should also see if the structural break has an impact on our cointegration testing.</p>
<p>To do this we will use the Gregory-Hansen cointegration test, which can be implemented using the <code>coint_ghansen</code> procedure in the <code>tspdlib</code> library.</p>
<p>We can carry over all of our <code>coint_egranger</code> testing specifications, except our model specification. </p>
<h3 id="the-model-specification">The Model Specification</h3>
<p>When implementing the Gregory-Hansen test, we must decide on a model which specifies:</p>
<ul>
<li>Which deterministic components are present in the cointegration regression.</li>
<li>How the structural break affects the cointegration regression. </li>
</ul>
<p>There are four modeling options to choose from:</p>
<ol>
<li>The level shift [<code>model = 1</code>]<br/><br/>
$$y_{sp, t} = \mu_1(1 - d_{\tau}) + \mu_{1,\tau} d_{\tau} + \beta_1 y_{money, t} + \beta_2 y_{bond, t} + u_t$$<br/>
In this model, there is a structural break at time $\tau$ and $d_{\tau}$ is an indicator variable equal to 1 when $t \geq \tau$. The constant before the structural break is $\mu_1$ and the constant after the structural break is $\mu_{1,\tau}$.<br/><br/></li>
<li>The level shift with trend [<code>model = 2</code>]<br/><br/>
$$y_{sp, t} = \mu_1(1 - d_{\tau}) + \mu_{1,\tau} d_{\tau} + \delta t + \beta_1 y_{money, t} + \beta_2 y_{bond, t} + u_t$$<br/>
In this model, the structural break again affects the constant. However, there is also a time trend included in the model.<br/><br/></li>
<li>The regime shift [<code>model = 3</code>]<br/><br/>
$$y_{sp, t} = \mu_1(1 - d_{\tau}) + \mu_{1,\tau} d_{\tau} + \beta_1(1 - d_{\tau})y_{money, t} +$$ $$\beta_{1,\tau}d_{\tau}y_{money, t} + \beta_2(1 - d_{\tau}) y_{bond, t} + \beta_{2,\tau}d_{\tau}y_{bond, t} + u_t$$<br/>
In this model, the structural break affects the constant and regression coefficients.<br/><br/></li>
<li>The regime and trend shift [<code>model = 4</code>]<br/><br/>
$$y_{sp, t} = \mu_1(1 - d_{\tau}) + \mu_{1,\tau} d_{\tau} + \delta_1(1 - d_{\tau}) t + \delta_{1,\tau}d_{\tau}t + \beta_1(1 - d_{\tau})y_{money, t} +$$ $$\beta_{1,\tau}d_{\tau}y_{money, t} + \beta_2(1 - d_{\tau}) y_{bond, t} + \beta_{2,\tau}d_{\tau}y_{bond, t} + u_t$$<br/>
In this model, the structural break affects the constant, the trend, and the regression coefficients.<br/><br/></li>
</ol>
<p>For example, let's consider the last case, where the constant, coefficients, and trend are all impacted by the structural break:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set fname to name of dataset
fname = "nelsonplosser.dta";

// Load three variables from the dataset
// and remove rows with missing values
coint_data = packr(loadd(fname, "sp500 + m + bnd"));

// Define y and x matrix
y = coint_data[., 1];
x = coint_data[., 2 3];

// Regime and trend shift
model = 4; 

/*
** Information Criterion: 
** 1=Akaike; 
** 2=Schwarz; 
** 3=t-stat sign.
*/
ic = 2; 

// Maximum number of lags 
pmax = 12;  

/*
** Long run variance computation
** 1 = iid
** 2 = Bartlett
** 3 = Quadratic Spectral (QS);
** 4 = SPC with Bartlett /see (Sul, Phillips &amp; Choi, 2005)
** 5 = SPC with QS;
** 6 = Kurozumi with Bartlett
** 7 = Kurozumi with QS
*/ 
varm = 1;

// Bandwidth for variance 
bwl=1;

// Data trimming
trimm=0.1;

// Perform cointegration test
{ ADF_min_gh, TBadf_gh, Zt_min_gh, TBzt_gh, Za_min_gh, TBza_gh, cvADFZt_gh, cvZa_gh } =
    coint_ghansen(y, x, model, bwl, ic, pmax, varm, trimm);</code></pre>
<h2 id="interpreting-our-cointegration-results-with-one-structural-break">Interpreting Our Cointegration Results with One Structural Break</h2>
<p>The <code>coint_ghansen</code> procedure provides more extensive results than the <code>coint_egranger</code> test. In particular, the <a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/#testing-for-cointegration-with-structural-breaks">Gregory-Hansen test</a>:</p>
<ul>
<li>Performs Augmented Dickey-Fuller testing on the residuals from the cointegration regression. </li>
<li>Performs <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/#the-phillips-perron-test">Phillips-Perron testing</a> on the residuals from the cointegration regression. </li>
<li>Identifies structural breaks. </li>
</ul>
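<p>Once <code>coint_ghansen</code> has been called, its returns can be summarized directly. In this sketch, the assumed layout is that <code>cvADFZt_gh</code> holds the critical values for the $ADF$ and $Z_t$ statistics and <code>cvZa_gh</code> those for $Z_{\alpha}$:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Print each minimum statistic with its estimated break date
print "ADF min statistic: " ADF_min_gh " break date: " TBadf_gh;
print "Zt min statistic: " Zt_min_gh " break date: " TBzt_gh;
print "Za min statistic: " Za_min_gh " break date: " TBza_gh;

// Critical values (assumed: cvADFZt_gh for ADF/Zt, cvZa_gh for Za)
print "ADF/Zt critical values: " cvADFZt_gh';
print "Za critical values: " cvZa_gh';</code></pre>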
<h3 id="cointegration-results-with-one-structural-break">Cointegration results with one structural break</h3>
<p><b>Cointegration test results</b><br />
After calling the <code>coint_ghansen</code> procedure and testing all possible models, we obtain the following test statistic results:</p>
<table>
<thead>
<tr><th>Test</th><th>$ADF$ Test Statistic</th><th>$Z_t$ Test Statistic</th><th>$Z_{\alpha}$ Test Statistic</th><th>10% Critical Value $ADF$,$Z_t$</th><th>10% Critical Value $Z_{\alpha}$</th><th>Conclusion</th></tr>
</thead>
<tbody>
<tr><td><b>Gregory-Hansen, Level shift</b></td><td>-4.004</td><td>-3.819</td><td>-27.858</td><td>-4.690</td><td>-42.490</td><td>Cannot reject the null of no cointegration for $ADF$, $Z_t$, or $Z_{\alpha}$.</td></tr>
<tr><td><b>Gregory-Hansen, Level shift with trend</b></td><td>-3.889</td><td>-3.751</td><td>-27.618</td><td>-5.030</td><td>-48.94</td><td>Cannot reject the null of no cointegration for $ADF$, $Z_t$, or $Z_{\alpha}$.</td></tr>
<tr><td><b>Gregory-Hansen, Regime change</b></td><td>-4.658</td><td>-4.539</td><td>-32.766</td><td>-5.23</td><td>-52.85</td><td>Cannot reject the null of no cointegration for $ADF$, $Z_t$, or $Z_{\alpha}$.</td></tr>
<tr><td><b>Gregory-Hansen, Regime change with trend</b></td><td>-5.834</td><td>-4.484</td><td>-32.411</td><td>-5.72</td><td>-63.10</td><td>Cannot reject the null of no cointegration for  $ADF$, $Z_t$, or $Z_{\alpha}$.</td></tr>
</tbody>
</table>
<p>As we can see from these results, there is no evidence that our S&amp;P 500 Index is cointegrated with the money stock and bond yield. </p>
<p><b>Structural break results</b><br />
The <code>coint_ghansen</code> procedure also returns estimates for break dates based on the $ADF$, $Z_t$, and $Z_{\alpha}$ tests:</p>
<table>
<thead>
<tr><th>Test</th><th>$ADF$ Break Date</th><th>$Z_t$ Break Date</th><th>$Z_{\alpha}$ Break Date</th></tr>
</thead>
<tbody>
<tr><td><b>Gregory-Hansen, Level shift</b></td><td>1958</td><td>1956</td><td>1956</td></tr>
<tr><td><b>Gregory-Hansen, Level shift with trend</b></td><td>1958</td><td>1956</td><td>1956</td></tr>
<tr><td><b>Gregory-Hansen, Regime change</b></td><td>1955</td><td>1955</td><td>1955</td></tr>
<tr><td><b>Gregory-Hansen, Regime change with trend</b></td><td>1951</td><td>1953</td><td>1947</td></tr>
</tbody>
</table>
<h3 id="what-can-we-conclude-from-the-gregory-hansen-cointegration-test">What can we Conclude from the Gregory-Hansen Cointegration Test?</h3>
<p>The results from our Gregory Hansen cointegration test provide some important conclusions:</p>
<ul>
<li>There is no support for cointegration.</li>
<li>Incorporating a structural break does NOT change our conclusion that there is no cointegration. </li>
</ul>
<p>Note that while the Gregory-Hansen test does estimate break dates, it does not provide the statistical evidence to conclude whether these are statistically significant break dates or not. </p>
<h2 id="conclusion">Conclusion</h2>
<p>Today's blog looks closer at the Engle-Granger and Gregory-Hansen residual-based cointegration tests. By building a better understanding of how the tests work and what assumptions we make when running the tests, you will be better equipped to interpret the test results. </p>
<p>In particular, today we learned:</p>
<ul>
<li>How to prepare for cointegration testing.</li>
<li>How to set up the specifications for cointegration tests. </li>
<li>How to interpret the results from the Engle-Granger and Gregory-Hansen cointegration tests. </li>
</ul>
<p>
    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/how-to-interpret-cointegration-test-results/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Panel data, structural breaks and unit root testing</title>
		<link>https://www.aptech.com/blog/panel-data-structural-breaks-and-unit-root-testing/</link>
					<comments>https://www.aptech.com/blog/panel-data-structural-breaks-and-unit-root-testing/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Sat, 23 Feb 2019 08:35:14 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Panel data]]></category>
		<category><![CDATA[panel data]]></category>
		<category><![CDATA[unit root]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=19541</guid>

					<description><![CDATA[In this blog, we extend  our <a href="https://www.aptech.com/blog/unit-root-tests-with-structural-breaks/">analysis of unit root testing</a> with <a href="https://www.aptech.com/structural-breaks/">structural breaks</a> to <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/">panel data</a>. Using panel data unit roots tests found in the GAUSS <a href="https://github.com/aptech/tspdlib">tspdlib</a> we consider if a panel of international current account balances collectively shows unit root behavior.
]]></description>
										<content:encoded><![CDATA[<p><img src="https://www.aptech.com/wp-content/uploads/2019/02/gblog-sb-02202018-1.png" alt="US Current Account Balance" /></p>
<h3 id="introduction">Introduction</h3>
<p>In this blog, we extend <a href="https://www.aptech.com/blog/unit-root-tests-with-structural-breaks/" target="_blank" rel="noopener">last week's</a> analysis of unit root testing with <a href="https://www.aptech.com/structural-breaks/" target="_blank" rel="noopener">structural breaks</a> to <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">panel data</a>.</p>
<p>We will again use the quarterly current account to GDP ratio but focus on a panel of data from five countries: the United States, the United Kingdom, Australia, South Africa, and India.</p>
<p>Using panel data unit roots tests found in the <a href="https://docs.aptech.com/gauss/tspdlib/docs/tspdlib-landing.html" target="_blank" rel="noopener">GAUSS tspdlib library</a> we consider if the panel collectively shows unit root behavior.</p>
<h2 id="testing-for-unit-roots-in-panel-data">Testing for unit roots in panel data</h2>
<h3 id="why-panel-data">Why panel data</h3>
<p>There are a number of reasons we utilize <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">panel data</a> in econometrics (Baltagi, 2008). Panel data:</p>
<ul>
<li>Capture the idiosyncratic behaviors of individual groups with models like the fixed effects or random effects models.</li>
<li>Contain more information, more variability, and more efficiency.</li>
<li>Can detect and measure statistical effects that pure time-series or cross-section data can't. </li>
<li>Provide longer time-series for unit-root testing, which in turn leads to standard asymptotic behavior.</li>
</ul>
<h3 id="panel-data-unit-root-testing">Panel data unit root testing</h3>
<p>Today we will test for unit roots using the panel Lagrange Multiplier (LM) unit-root test with structural breaks in the mean (Im, K., Lee, J., Tieslau, M., 2005):</p>
<ul>
<li>The panel LM test statistic averages the individual LM test statistics which are computed using the pooled likelihood function. </li>
<li>The asymptotic distribution of the test is robust to structural breaks. </li>
<li>The test considers the null unit root hypothesis against the alternative that at least one time series in the panel is stationary. </li>
</ul>
<h2 id="testing-our-panel">Testing our panel</h2>
<h3 id="setting-up-the-test">Setting up the test</h3>
<p>The panel LM test can be run using the <strong>GAUSS</strong> <a href="https://docs.aptech.com/gauss/tspdlib/docs/pdlm.html" target="_blank" rel="noopener">PDLM</a> procedure found in the GAUSS <strong>tspdlib</strong> library. The procedure has two required inputs and four additional optional arguments:</p>
<hr />
<dl>
<dt>y_test</dt>
<dd>T x N matrix,  the panel data to be tested.</dd>
<dt>model</dt>
<dd>Scalar,  indicates the type of model to be tested.<br>    1 = break in level.<br>    2 = break in level and trend.</dd>
<dt>nbreak</dt>
<dd>Scalar,  optional input, the number of breaks to allow. <br>    1 = one break.<br>    2 = two breaks. Default = 0.</dd>
<dt>pmax</dt>
<dd>Scalar,  optional input, maximum number of lags for Dy. 0 = no lags. Default = 8.</dd>
<dt>ic</dt>
<dd>Scalar,  optional input, the information criterion used to select lags. <br>    1 = Akaike. <br>    2 = Schwarz. <br>    3 = t-stat significance. Default = 3.</dd>
<dt>trimm</dt>
<dd>Scalar,  optional input, data trimming rate. Default = 0.10
<hr /></dd>
</dl>
<p>The <code>PDLM</code> procedure has five returns:</p>
<hr />
<dl>
<dt>Nlm</dt>
<dd>Vector,  the minimum test statistic for each cross-section.</dd>
<dt>Ntb</dt>
<dd>Vector,  location of break(s) for each cross-section.</dd>
<dt>Np</dt>
<dd>Vector,  number of lags selected by the chosen information criterion for each cross-section.</dd>
<dt>PDlm</dt>
<dd>Scalar,  panel LM statistic with N(0, 1).</dd>
<dt>pval</dt>
<dd>Scalar,  p-value of <em>PDlm</em>.</dd>
</dl>
<hr />
<h3 id="running-the-test">Running the test</h3>
<p>The test is easy to set up and run in GAUSS. We first load the <strong>tspdlib</strong> library and our data. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tspdlib;

// Load data
ca_panel = loadd("panel_ca.dat");
y_test = ca_panel[., 2:cols(ca_panel)];
</code></pre>
<p>Next, we specify that we want to run the model with level breaks and we call the <code>PDLM</code> procedure separately for the one break and two break models. We will keep all other parameters at their default values:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify to run model with 
// level breaks
model = 1;

// Run first with one break
nbreak = 1;

// Call PD LM with one level break
{ Nlm, Ntb, Np, PDlm, pval } = PDLM(y_test, model, nbreak);

// Run next with two breaks
nbreak = 2;

// Call PD LM with level break
{ Nlm, Ntb, Np, PDlm, pval } = PDLM(y_test, model, nbreak);</code></pre>
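<p>Because the procedure returns the p-value of the panel statistic directly, the panel-level conclusion can be drawn in code. A minimal sketch using the returns from the call above:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Report the panel LM statistic and its p-value
print "Panel LM statistic: " PDlm;
print "p-value: " pval;

// Panel-level decision at the 5% level
if pval &lt; 0.05;
    print "Reject the null of a unit root for the panel.";
else;
    print "Cannot reject the null of a unit root for the panel.";
endif;</code></pre>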
<h3 id="results">Results</h3>
<table style="border-collapse: collapse">
<tr>
<th>Country</th><th>Cross-section<br> test statistic</th><th>Break<br> location</th><th>Number of<br> lags</th><th>Conclusion</th>
</tr>
<tr><th colspan="5">Two break model</th></tr>
<tr>
<td>United States</td><td>-3.3067</td><td>1993 Q1, 2004 Q3</td><td>12</td><td>Reject the null</td>
</tr>
<tr>
<td>United Kingdom</td><td>-4.6080</td><td>1980 Q4, 1984 Q4</td><td>4</td><td>Reject the null</td>
</tr>
<tr>
<td>Australia</td><td>-3.9522</td><td>1970 Q3, 1977 Q4</td><td>12</td><td>Reject the null</td>
</tr>
<tr>
<td>South Africa</td><td>-5.6735</td><td>1976 Q4, 1983 Q4</td><td>4</td><td>Reject the null</td>
</tr>
<tr style="border-bottom: 2px solid #444;">
<td>India</td><td>-5.6734</td><td>1975 Q4, 2004 Q2</td><td>9</td><td>Reject the null</td>
</tr>
<tr style="border-top: 1px inset #444;"><td>Full Panel</td><td>-6.6339526</td><td>N/A</td><td>N/A</td><td>Reject the null</td>
</tr>
<tr><td colspan="5"> </td></tr>
<tr><th colspan="5">One break model</th></tr>

<tr>
<td>United States</td><td>-3.0504</td><td>1993 Q1</td><td>12</td><td>Reject the null</td>
</tr>
<tr>
<td>United Kingdom</td><td>-4.1213</td><td>1984 Q4</td><td>4</td><td>Reject the null</td>
</tr>
<tr>
<td>Australia</td><td>-3.1625</td><td>1980 Q2</td><td>12</td><td>Reject the null</td>
</tr>
<tr>
<td>South Africa</td><td>-5.1271</td><td>1979 Q4</td><td>4</td><td>Reject the null</td>
</tr>
<tr>
<td>India</td><td>-2.8001</td><td>1976 Q2</td><td>9</td><td>Reject the null</td>
</tr>
<tr style="border-top: 2px solid #444;"><td>Full Panel</td><td>-8.9118730</td><td>N/A</td><td>N/A</td><td>Reject the null</td>
</tr>
</table>
<p>Research on the presence of unit roots in current account balances has produced mixed results, bringing to the forefront the question of current account balance sustainability (Clower &amp; Ito, 2012).</p>
<p>Our panel tests with structural breaks unanimously reject the null hypothesis of unit roots for all cross-sections, as well as the combined panel. This adds support, at least for our small sample, to the idea that current account balances are sustainable and mean-reverting. </p>
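<p>The rejection decision for the combined panel can be read straight from the p-value returned by <code>PDLM</code>. A minimal, illustrative sketch of the decision rule at the 5% significance level:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Illustrative decision rule using the panel p-value from PDLM
alpha = 0.05;

if pval &lt; alpha;
    print "Reject the null hypothesis of a unit root for the panel.";
else;
    print "Fail to reject the null hypothesis of a unit root.";
endif;</code></pre>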
<h2 id="conclusions">Conclusions</h2>
<p>Today we've learned about conducting panel data unit root testing in the presence of structural breaks using the LM test of <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0084.2005.00125.x">Im, Lee, and Tieslau (2005)</a>. After today you should have a better understanding of:</p>
<ol>
<li>Some of the advantages of using panel data.</li>
<li>How to test for unit roots in panel data using the LM test with structural breaks.</li>
<li>How to use the <a href="https://github.com/aptech/tspdlib">GAUSS tspdlib library</a> to test for unit roots with structural breaks.</li>
</ol>
<p>Code and data from this blog can be found <a href="https://github.com/aptech/gauss_blog/tree/master/time_series/panel-unitroot-2.22.19" target="_blank" rel="noopener">here</a>.</p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/panel-data-basics-one-way-individual-effects/" target="_blank" rel="noopener">Panel Data Basics: One-way Individual Effects</a></li>
<li><a href="https://www.aptech.com/blog/how-to-aggregate-panel-data-in-gauss/" target="_blank" rel="noopener">How to Aggregate Panel Data in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">Introduction to the Fundamentals of Panel Data</a></li>
<li><a href="https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/" target="_blank" rel="noopener">Panel Data Stationarity Test With Structural Breaks</a></li>
<li><a href="https://www.aptech.com/blog/transforming-panel-data-to-long-form-in-gauss/" target="_blank" rel="noopener">Transforming Panel Data to Long Form in GAUSS</a></li>
</ol>
<h3 id="references">References</h3>
<p>Baltagi, B. (2008). <em>Econometric analysis of panel data</em>. John Wiley &amp; Sons.</p>
<p>Clower, E., &amp; Ito, H. (2012). The persistence of current account balances and its determinants: the implications for global rebalancing.</p>
<p>Im, K., Lee, J., &amp; Tieslau, M. (2005). Panel LM Unit-root Tests with Level Shifts. <em>Oxford Bulletin of Economics and Statistics</em>, 67, 393&#8211;419.</p>
					
					<wfw:commentRss>https://www.aptech.com/blog/panel-data-structural-breaks-and-unit-root-testing/feed/</wfw:commentRss>
			<slash:comments>9</slash:comments>
		
		
			</item>
	</channel>
</rss>
