<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>structural breaks &#8211; Aptech</title>
	<atom:link href="https://www.aptech.com/blog/tag/structural-breaks/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aptech.com</link>
	<description>GAUSS Software - Fastest Platform for Data Analytics</description>
	<lastBuildDate>Thu, 08 May 2025 02:31:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>How to Run the Fourier LM Test (Video)</title>
		<link>https://www.aptech.com/blog/how-to-run-the-fourier-lm-test-video/</link>
					<comments>https://www.aptech.com/blog/how-to-run-the-fourier-lm-test-video/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 24 Sep 2021 15:05:04 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[structural breaks]]></category>
		<category><![CDATA[unit root]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11581728</guid>

					<description><![CDATA[Learn everything you need to know to run the Fourier LM unit root test with your data and interpret the results.]]></description>
										<content:encoded><![CDATA[<iframe width="560" height="315" src="https://www.youtube.com/embed/VNP7TqC5Goc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>Learn everything you need to know to run the Fourier LM unit root test with your data and interpret the results.</p>
<h3 id="video-chapters">Video chapters:</h3>
<ul>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=0s">0:00 Introduction</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=17s">0:17 Why use the Fourier LM test?</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=75s">1:15 Run the Fourier LM example</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=133s">2:13 Explanation of test inputs</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=233s">3:53 Example file results</a></li>
<li><a href="https://www.youtube.com/watch?v=VNP7TqC5Goc&amp;t=242s">4:02 Run Fourier LM test on new data</a></li>
</ul>
<h3 id="additional-resources">Additional Resources</h3>
<ul>
<li><a href="https://www.aptech.com/why-gauss-for-unit-root-testing/#ur_test_guide">Unit Root Test Selection Guide</a></li>
<li><a href="https://www.aptech.com/blog/a-guide-to-conducting-cointegration-tests/">Guide to Conducting Cointegration Tests</a> </li>
<li><a href="https://www.aptech.com/blog/how-to-interpret-cointegration-test-results/">How to Interpret Cointegration Test Results</a></li>
<li><a href="https://docs.aptech.com/gauss/data-management.html">GAUSS Data Management Guide</a></li>
<li><a href="https://www.aptech.com/blog/the-current-working-directory-what-you-need-to-know/">GAUSS Working Directory</a></li>
</ul>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/how-to-run-the-fourier-lm-test-video/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Introduction to Markov-Switching Models</title>
		<link>https://www.aptech.com/blog/introduction-to-markov-switching-models/</link>
					<comments>https://www.aptech.com/blog/introduction-to-markov-switching-models/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 03 Sep 2021 18:33:33 +0000</pubDate>
				<category><![CDATA[Time Series]]></category>
		<category><![CDATA[regime-switching]]></category>
		<category><![CDATA[structural breaks]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11581692</guid>

					<description><![CDATA[Markov-switching models offer a powerful tool for capturing the real-world behavior of time series data. Today's blog provides an introduction to Markov-switching models including:
<ul>
<li>What a regime switching model is and how it differs from a structural break model.</li>
<li>When we should use the regime switching model.</li>
<li>What a Markov-switching model is.</li>
<li>What tools we use to estimate Markov-switching models.</li>
</ul>]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>Markov-switching models offer a powerful tool for capturing the real-world behavior of time series data. Today's blog provides an introduction to Markov-switching models including: </p>
<ul>
<li>What a regime switching model is and how it differs from a structural break model. </li>
<li>When we should use the regime switching model.</li>
<li>What a Markov-switching model is. </li>
<li>What tools we use to estimate Markov-switching models. </li>
</ul>
<h2 id="what-is-a-regime-switching-model">What Is A Regime Switching Model?</h2>
<p>Traditional <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/">time series models</a> assume that one set of model parameters can be used to describe the behavior of the data over all time. This assumption isn't always valid for what we encounter in real-world data.</p>
<p>Real-world time series data may have different characteristics, such as means and variances, across different time periods. Regime-switching models:</p>
<ul>
<li>Characterize data as falling into different, recurring “regimes” or “states”.</li>
<li>Allow the characteristics of time series data, including means, variances, and model parameters to change across regimes.</li>
<li>Assume that at any given time period there is a probability that the series may be in any of the regimes and may transition to a different regime. </li>
</ul>
<p>These components can allow regime change models to better capture the true behavior of real-world data than standard models.</p>
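<p>As a simple illustration (a hypothetical simulation, not drawn from any GAUSS library), we can generate a series whose mean switches between two recurring regimes governed by fixed staying probabilities:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Hypothetical illustration: simulate a series whose mean
// switches between two recurring regimes
rndseed 23423;

nobs = 200;
p11 = 0.95;          // Probability of staying in regime 1
p22 = 0.90;          // Probability of staying in regime 2
mu = { 0, 2 };       // Regime-specific means

s = 1;               // Start in regime 1
y = zeros(nobs, 1);

for t(1, nobs, 1);
    // Draw the next regime from the transition probabilities
    u = rndu(1, 1);
    if s == 1 and u > p11;
        s = 2;
    elseif s == 2 and u > p22;
        s = 1;
    endif;

    // Observation = regime mean + noise
    y[t] = mu[s] + rndn(1, 1);
endfor;</code></pre>
<p>Plotting <code>y</code> shows extended stretches near each regime mean, the kind of recurring behavior a single-parameter model cannot capture.</p>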
<h2 id="how-are-regime-change-models-different-than-structural-break-models">How Are Regime Change Models Different Than Structural Break Models?</h2>
<p>At first glance, it can be difficult to distinguish regime change models from <a href="https://www.aptech.com/structural-breaks/">structural break models</a>. They both allow for changes in the underlying model of time series data. However, there are distinct differences:</p>
<table>
 <thead>
 <tr>
      <th style="background-color: #36434C" colspan="3"><h3 id="structural-break-models-vs-regime-change-models"><span style="color:#FFFFFF">Structural Break Models vs. Regime Change Models</span></h3>
      </th>
   </tr>
 <tr><th>Regime Change Models</th><th>Structural Break Models</th></tr>
</thead>
<tbody>
<tr><td width="50%"><ul><li>Parameters vary across different regimes.</li><li>Finite number of regimes.</li><li>Regimes can be temporary and recurring.</li><li>Used to model effects of cyclically occurring changes in the economy.</li><li>Regime is unobserved and driven by stochastic process.</li></ul></td><td width="50%"><ul><li>Parameters change at different times.</li><li>Infinite number of structural changes.</li><li>Non-recurring and permanent shifts.</li><li>Usually used to model effects of permanent changes in economic structure.</li></ul></td></tr>
</tbody>
</table>
<p>In a way, we can think of structural change models as a very special case of regime change models, in which each possible &quot;regime&quot; occurs only once. </p>
<h2 id="when-should-you-use-regime-switching-models">When Should You Use Regime Switching Models?</h2>
<p>Regime switching models are most commonly used to model time series data that fluctuates between recurring &quot;states&quot;. Put another way, if we are working with data that seems to cycle between distinct periods of behavior, we may want to consider a regime switching model.</p>
<table>
 <thead>
 <tr>
      <th colspan="2"><h3 id="example-applications-of-regime-change-models">Example Applications of Regime Change Models</h3>
      </th>
   </tr>
 <tr><th>Underlying cause</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>Financial crises.</td><td>Many economic time series behave differently in times of financial stability than financial crisis.</td></tr>
<tr><td>Economic downturns.</td><td>The behavior of economic time series is characterized differently in periods of economic expansion versus economic recession.</td></tr>
<tr><td>Changes in tax policies.</td><td>Household behavior, such as income allocation between consumption and saving, changes depending on tax policy regimes. </td></tr>
<tr><td>Hyperinflation.</td><td>Economic fundamentals behave differently in periods of hyperinflation and "normal" rates of inflation.</td></tr>
</tbody>
</table>
<h2 id="the-markov-switching-model">The Markov-Switching Model</h2>
<p>The <a href="https://www.aptech.com/examples/tsmt/switchfit-gnp/">Markov-switching</a> model is a popular type of regime-switching model which assumes that unobserved states are determined by an underlying stochastic process known as a Markov-chain. </p>
<h3 id="what-is-a-markov-chain">What is a Markov-chain?</h3>
<p>A Markov-chain is a stochastic process used to describe how uncertain and unobserved outcomes occur. In the case of the Markov-switching model, it is used to describe how data falls into unobserved regimes. A Markov-chain has the property that future states are dependent only on present states (this is known as the Markov property). </p>
<div class="alert alert-info" role="alert">The Markov-chain used in Markov-switching models is the same concept used in <a href="https://www.aptech.com/blog/fundamental-bayesian-samplers/">Markov-Chain Monte Carlo sampling (MCMC)</a>.</div>
<p>A key characteristic of a Markov-chain is its transition probabilities. The transition probabilities describe the likelihood that the current regime stays the same or changes (i.e., the probability that the regime transitions to another regime). </p>
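<p>For a two-regime model, these probabilities are conventionally collected in a transition matrix, where $p_{ij}$ denotes the probability of moving from regime $i$ to regime $j$, and each row sums to one:</p>
<p>$$
P = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}, \qquad p_{11} + p_{12} = 1, \quad p_{21} + p_{22} = 1.
$$</p>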
<h3 id="the-components-of-the-markov-switching-model">The Components of the Markov-Switching Model</h3>
<p>The complete Markov-switching model includes:</p>
<ul>
<li>An assumed number of regimes.</li>
<li>A dependent variable. </li>
<li>Independent variables. </li>
<li>Parameters relating the dependent variable to the independent variables for each regime.</li>
<li>Transition probabilities. </li>
<li>Statistical inferences on the model parameters and the determined states.</li>
</ul>
<h3 id="how-are-markov-switching-models-estimated">How Are Markov-Switching Models Estimated?</h3>
<p>Markov-switching models are usually estimated using:</p>
<ul>
<li><a href="https://www.aptech.com/blog/beginners-guide-to-maximum-likelihood-estimation-in-gauss/">Maximum likelihood estimation</a>. </li>
<li><a href="https://www.aptech.com/resources/tutorials/bayesian-fundamentals/">Bayesian estimation</a>. </li>
</ul>
<h2 id="maximum-likelihood-estimation-of-markov-switching-models">Maximum Likelihood Estimation of Markov-switching Models</h2>
<p>Maximum likelihood estimation of Markov-switching models utilizes an iterative algorithm known as <strong>expectation-maximization</strong>. The expectation-maximization algorithm is an estimation method for models that contain a latent (unobserved) variable. Its use for Markov-switching models was first proposed by <a href="https://econweb.ucsd.edu/~jhamilto/software.htm#Markov">John Hamilton in 1990</a>. </p>
<p>The expectation-maximization algorithm broadly involves two steps:</p>
<ol>
<li>Estimating the latent variable. This is known as the <strong>E-Step</strong>. </li>
<li>Estimating the parameters of the model given the value of the latent variable. This is known as the <strong>M-step</strong>.</li>
</ol>
<p>In the context of the Markov-Switching model, this means we:</p>
<ol>
<li>Use a filtering-smoothing algorithm, such as the <a href="https://www.aptech.com/resources/tutorials/tsmt/filtering-data-with-the-kalman-filter/">Kalman smoother</a>, to propose the path of the unobserved variable.</li>
<li>Use maximum likelihood, given the current regime, to estimate the model parameters, including the transition probabilities. </li>
<li>Repeat steps 1 &amp; 2 using updated parameter estimates until convergence. </li>
</ol>
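<p>Schematically, this iteration can be sketched as follows (a hypothetical outline; <code>eStep</code> and <code>mStep</code> are placeholder procedures, not part of any GAUSS library):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Hypothetical sketch of the expectation-maximization loop
theta = theta0;              // Starting parameter values
tol = 1e-6;                  // Convergence tolerance
diff = 1;

do while diff > tol;
    // E-step: propose the path of the unobserved regime
    pRegime = eStep(y, theta);

    // M-step: maximize the likelihood given the proposed regimes,
    // including the transition probabilities
    thetaNew = mStep(y, pRegime);

    // Check convergence of the parameter estimates
    diff = maxc(abs(thetaNew - theta));
    theta = thetaNew;
endo;</code></pre>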
<h2 id="bayesian-estimation-of-markov-switching-models">Bayesian Estimation of Markov-Switching Models</h2>
<p>Bayesian estimation of Markov-switching models relies on drawing samples from a joint distribution of the parameters, states, and transition probabilities using a <a href="https://www.aptech.com/blog/fundamental-bayesian-samplers/">Markov Chain Monte Carlo method (MCMC)</a>. This method benefits from the fact that the likelihood function for the model doesn't have to be directly calculated. </p>
<p>A common tool used for Bayesian estimation of the Markov-switching models is the <a href="https://www.aptech.com/resources/tutorials/bayesian-fundamentals/gibbs-sampling-from-a-bivariate-normal-distribution/">Gibbs sampler</a>. </p>
<h2 id="conclusion">Conclusion</h2>
<p>Congratulations! In today's blog, you learned the basics of the powerful Markov-switching model. After reading this blog, you should have a better understanding of:</p>
<ul>
<li>What a regime switching model is and how it differs from a structural break model. </li>
<li>When to use a regime switching model.</li>
<li>What a Markov-switching model is. </li>
<li>What tools we use to estimate Markov-switching models. </li>
</ul>
<p>    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>
<h2 id="try-out-the-gauss-time-series-mt-library">Try Out The GAUSS Time Series MT Library</h2>

]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/introduction-to-markov-switching-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Panel Data Stationarity Test With Structural Breaks</title>
		<link>https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/</link>
					<comments>https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 02 Oct 2020 05:24:31 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Panel data]]></category>
		<category><![CDATA[structural breaks]]></category>
		<category><![CDATA[unit root]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=21878</guid>

					<description><![CDATA[Reliable unit root testing is an important step of any time series analysis or panel data analysis. 

However, standard time series unit root tests and panel data unit root tests aren’t reliable when structural breaks are present. Because of this, when structural breaks are suspected, we must employ unit root tests that properly incorporate these breaks. 

Today we will examine one of those tests, the Carrion-i-Silvestre, et al. (2005) panel data test for stationarity in the presence of multiple structural breaks.]]></description>
										<content:encoded><![CDATA[<p>    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>
<h3 id="introduction">Introduction</h3>
<p>The validity of many <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">time series models</a> and <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">panel data models</a> requires that the underlying data is stationary. As such, reliable <a href="https://www.aptech.com/why-gauss-for-unit-root-testing/" target="_blank" rel="noopener">unit root testing</a> is an important step of any time series analysis or panel data analysis. </p>
<p>However, standard time series unit root tests and panel data unit root tests aren’t reliable when <a href="https://www.aptech.com/structural-breaks/" target="_blank" rel="noopener">structural breaks</a> are present. Because of this, when structural breaks are suspected, we must employ unit root tests that properly incorporate these breaks. </p>
<p>Today we will examine one of those tests, the Carrion-i-Silvestre, et al. (2005) panel data test for stationarity in the presence of multiple structural breaks.</p>
<h2 id="why-panel-data-unit-root-testing">Why Panel Data Unit Root Testing?</h2>
<p>We may be tempted when working with panel data to treat the data as individual time-series, performing unit root testing on each one separately. However, one of the fundamental ideas of panel data is that there is a shared underlying component that connects the group. </p>
<p>It is this shared component that suggests there are advantages to be gained from testing the panel data collectively:</p>
<ul>
<li>Panel data contains more combined information and variation than pure time-series data or cross-sectional data.   </li>
<li>Collectively testing for unit roots in panels provides more power than testing individual series.  </li>
<li>Panel data unit root tests are more likely than time series unit root tests to have standard asymptotic distributions. </li>
</ul>
<p>Put simply, when dealing with panel data, using tests designed specifically for panel data and testing the panel collectively can lead to more reliable results.</p>
<div class="alert alert-info" role="alert">For more background on unit root testing, see our previous blog post, <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/" target="_blank" rel="noopener">“How to Conduct Unit Root Tests in GAUSS”</a>.</div>
<h2 id="why-do-we-need-to-worry-about-structural-breaks">Why do we Need to Worry About Structural Breaks?</h2>
<p>It is important to properly address structural breaks when conducting unit root testing because most <strong>standard unit root tests are biased towards non-rejection</strong> of the unit root null hypothesis when breaks are present. We discuss this in greater detail in our <a href="https://www.aptech.com/blog/unit-root-tests-with-structural-breaks/" target="_blank" rel="noopener">“Unit Root Tests with Structural Breaks”</a> blog.</p>
<h2 id="panel-data-stationarity-test-with-structural-breaks">Panel Data Stationarity Test with Structural Breaks</h2>
<p>The Carrion-i-Silvestre, <em>et al.</em> (2005) panel data stationarity test introduces a number of important testing features:</p>
<ul>
<li>Tests the null hypothesis of stationarity against the alternative of non-stationarity.  </li>
<li>Allows for multiple, unknown structural breaks.  </li>
<li>Accommodates shifts in the mean and/or trend of the individual time series.   </li>
<li>Does not require the same breaks across the entire panel but, rather, allows for each individual to have a different number of breaks at different dates.   </li>
<li>Allows for homogeneous or heterogeneous long-run variances across individuals.  </li>
</ul>
<div style="text-align:center;background-color:#37444d;padding-top:40px;padding-bottom:40px;"><span style="color:#FFFFFF">Deciding which unit root test is right for your data?</span> <a href="https://www.aptech.com/why-gauss-for-unit-root-testing/#ur_test_guide">Download our Unit Root Selection Guide!</a></div>
<h2 id="conducting-panel-data-stationarity-tests-in-gauss">Conducting Panel Data Stationarity Tests in GAUSS</h2>
<h3 id="where-can-i-find-the-tests">Where can I Find the Tests?</h3>
<p>The panel data stationarity test with structural breaks is implemented by the <a href="https://docs.aptech.com/gauss/tspdlib/docs/pd_kpss.html" target="_blank" rel="noopener"><code>pd_kpss</code></a> procedure in the GAUSS <a href="https://docs.aptech.com/gauss/tspdlib/docs/tspdlib-landing.html" target="_blank" rel="noopener">tspdlib</a> library. </p>
<p>The library can be directly installed using the <a href="https://www.aptech.com/blog/gauss-package-manager-basics/" target="_blank" rel="noopener">GAUSS Package Manager</a>. </p>
<h3 id="what-format-should-my-data-be-in">What Format Should my Data be in?</h3>
<p>The <code>pd_kpss</code> procedure takes panel data in wide format. This means that each column of your data matrix should contain the time series observations for a different individual in the panel. </p>
<p>For example, if we have 100 observations of real GDP for 3 countries, our test data will be a 100 x 3 matrix.</p>
<table>
<thead>
<tr>
<th>Observation #</th>
<th>Country A</th>
<th>Country B</th>
<th>Country C</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.11</td>
<td>1.40</td>
<td>1.39</td>
</tr>
<tr>
<td>2</td>
<td>1.14</td>
<td>1.37</td>
<td>1.34</td>
</tr>
<tr>
<td>3</td>
<td>1.27</td>
<td>1.45</td>
<td>1.28</td>
</tr>
<tr>
<td>4</td>
<td>1.19</td>
<td>1.51</td>
<td>1.35</td>
</tr>
<tr>
<td>$\vdots$</td>
<td>$\vdots$</td>
<td>$\vdots$</td>
<td>$\vdots$</td>
</tr>
<tr>
<td>99</td>
<td>1.53</td>
<td>1.75</td>
<td>1.65</td>
</tr>
<tr>
<td>100</td>
<td>1.68</td>
<td>1.78</td>
<td>1.67</td>
</tr>
</tbody>
</table>
<h3 id="how-do-i-call-the-test-procedure">How do I Call the Test Procedure?</h3>
<p>The first step to implementing the panel data stationarity test with structural breaks in GAUSS is to load the <code>tspdlib</code> library. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tspdlib;</code></pre>
<p>This statement provides access to all the procedures in the <code>tspdlib</code> library. After loading the library, the <code>pd_kpss</code> procedure can be called directly from the command line or within a program file. </p>
<p>The <code>pd_kpss</code> procedure takes 2 required inputs and 5 optional arguments:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">{ test_hom, test_het, kpss_test, brks } = pd_kpss(y, model, 
                                                       nbreaks,
                                                       bwl,
                                                       varm, 
                                                       pmax, 
                                                       b_ctl);</code></pre>
<hr>
<dl>
<dt>y</dt>
<dd>$T \times N$ Wide form panel data to be tested.</dd>
<dt>model</dt>
<dd>Scalar, model to be used when structural breaks are found:
<table>
<tbody>
<tr><td>1</td><td>Constant (Hadri test)</td></tr>
<tr><td>2</td><td>Constant + trend (Hadri test)</td></tr>
<tr><td>3</td><td>Constant + shift (in mean)</td></tr>
<tr><td>4</td><td>Constant + trend + shift (in mean and trend)</td></tr>
</tbody>
</table></dd>
<dt>nbreaks</dt>
<dd>Scalar, Optional input, number of breaks to consider (up to 5). Default = 5.</dd>
<dt>bwl</dt>
<dd>Scalar, Optional input, bandwidth for the spectral window. Default = round(4 * (T/100)^(2/9)).</dd>
<dt>varm</dt>
<dd>Scalar, Optional input, kernel used for long-run variance computation. Default = 1:
<table>
<tbody>
<tr><td>1</td><td>iid</td></tr>
<tr><td>2</td><td>Bartlett.</td></tr>
<tr><td>3</td><td>Quadratic spectral (QS).</td></tr>
<tr><td>4</td><td>Sul, Phillips, and Choi (2003) with the Bartlett kernel.</td></tr>
<tr><td>5</td><td>Sul, Phillips, and Choi (2003) with quadratic spectral kernel.</td></tr>
<tr><td>6</td><td>Kurozumi with the Bartlett kernel.</td></tr>
<tr><td>7</td><td>Kurozumi with quadratic spectral kernel.</td></tr>
</tbody>
</table></dd>
<dt>pmax</dt>
<dd>Scalar, Optional input, the maximum number of lags used in the estimation of the AR(p) model for the long-run variance. The final number of lags is chosen using the BIC criterion. Default = 8.</dd>
<dt>b_ctl</dt>
<dd>Optional input, an instance of the <code>breakControl</code> structure controlling the settings for the Bai and Perron structural break estimation.</dd>
</dl>
<hr>
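<p>For example, a minimal call using only the two required inputs, with all optional arguments left at their defaults, might look like:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Test using model 3 (constant + shift in mean) with default settings
{ test_hom, test_het, kpss_test, brks } = pd_kpss(y, 3);</code></pre>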
<p>The <code>pd_kpss</code> procedure provides 4 returns:</p>
<hr>
<dl>
<dt>test_hom</dt>
<dd>Scalar, stationarity test statistic with structural breaks and homogeneous variance.</dd>
<dt>test_het</dt>
<dd>Scalar, stationarity test statistic with structural breaks and heterogeneous variance.</dd>
<dt>kpss_test</dt>
<dd>Matrix, individual tests. This matrix contains the test statistics in the first column, the number of breaks in the second column, the BIC chosen optimal lags, and the LWZ chosen optimal lags.</dd>
<dt>brks</dt>
<dd>Matrix of estimated breaks, with one row per individual group.</dd>
</dl>
<hr>
<h2 id="empirical-example">Empirical Example</h2>
<p>Let’s look further into testing for panel data stationarity with structural breaks using an empirical example.</p>
<h3 id="data-description">Data Description</h3>
<p>The dataset contains government deficit as a percentage of GDP for nine OECD countries. The time span ranges from 1995 to 2019. This gives us a balanced panel of 9 individuals and 25 time observations each. </p>
<h3 id="loading-our-data-into-gauss">Loading our data into GAUSS</h3>
<p>Our first step is to load the data from <code>govt-deficit-oecd.csv</code> using <a href="https://docs.aptech.com/gauss/loadd.html" target="_blank" rel="noopener"><code>loadd</code></a>. This <code>.csv</code> file contains three variables, <code>Country</code>, <code>Year</code>, and <code>Gov_deficit</code>. </p>
<p>We will load all three variables into a <a href="https://www.aptech.com/blog/what-is-a-gauss-dataframe-and-why-should-you-care/" target="_blank" rel="noopener">GAUSS dataframe</a>. Note that <code>loadd</code> automatically detects that <code>Country</code> is a categorical variable, and assigns the <code>category</code> type. However, we will need to convert <code>Year</code> to a <a href="https://www.aptech.com/blog/dates-and-times-made-easy/" target="_blank" rel="noopener">date variable</a>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load all variables and convert country to numeric categories
data = loadd("govt-deficit-oecd.csv");

// Convert "Year" to a date variable
data = asDate(data, "%Y", "Year");</code></pre>
<p>This loads our data in long format (a 225x3 dataframe). Our next step is to convert this to wide format using the <a href="https://docs.aptech.com/gauss/dfwider.html" target="_blank" rel="noopener"><code>dfWider</code></a> procedure. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify names_from column
names_from = "Country";

// Specify values_from column
values_from = "Gov_deficit";

// Convert from long to wide format
wide_data = dfWider(data, names_from, values_from);</code></pre>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Delete first column which contains the year variable
govt_def = delcols(wide_data, 1);</code></pre>
<h3 id="setting-up-our-model-parameters">Setting up our Model Parameters</h3>
<p>With our loading and transformations complete, we are ready to set up our testing parameters. For this test, we will allow for both a constant and trend.
All other parameters will be kept at their default values. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify which model to use:
// allow for both constant and trend.
model = 2;</code></pre>
<h3 id="calling-the-pd_kpss-procedure">Calling the <code>pd_kpss</code> Procedure</h3>
<p>Finally, we call the <code>pd_kpss</code> procedure:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">{ test_hom, test_het, kpss_test, brks } = pd_kpss(govt_def, model);</code></pre>
<h2 id="empirical-results">Empirical Results</h2>
<p>The <code>pd_kpss</code> output includes:</p>
<ul>
<li>A header describing the testing settings. </li>
<li>The <code>test_hom</code> and <code>test_het</code> test statistics along with associated p-values.</li>
<li>The critical values for both test statistics.  </li>
<li>The testing conclusions based on a comparison of the test statistics to the associated critical values. </li>
</ul>
<pre>Test:                                                PD KPSS
Ho:                                             Stationarity
Number of breaks:                                       None
LR variance:                                             iid
Model:                                Break in level &amp; trend
==============================================================
                                      PD KPSS          P-val

Homogenous                             14.352          0.000
Heterogenous                           10.425          0.000

Critical Values:
                            1%             5%            10%

Homogenous               2.326          1.645          1.282
Heterogenous             2.326          1.645          1.282
==============================================================

Homogenous var:
Reject the null hypothesis of stationarity at the 1% level.

Heterogenous var:
Reject the null hypothesis of stationarity at the 1% level.</pre>
<p>These results tell us that we can reject the null hypothesis of stationarity at the 1% level for both the homogeneous and heterogeneous variance cases.</p>
<p>The test results also include a table of individual test results and conclusions:</p>
<pre>==============================================================
Individual panel results
==============================================================
                                         KPSS    Num. Breaks

AUT                                     0.165          2.000
DEU                                     0.079          0.000
ESP                                     0.249          4.000
FRA                                     0.210          2.000
GBR                                     0.298          2.000
IRL                                     0.235          2.000
ITA                                     0.130          3.000
LUX                                     0.127          3.000
NOR                                     0.414          1.000

Critical Values:
                            1%             5%            10%

AUT                      0.059          0.048          0.043
DEU                      0.207          0.150          0.122
ESP                      0.035          0.031          0.028
FRA                      0.056          0.045          0.040
GBR                      0.058          0.046          0.041
IRL                      0.074          0.059          0.051
ITA                      0.055          0.045          0.041
LUX                      0.058          0.045          0.039
NOR                      0.083          0.066          0.058
==============================================================

AUT                                     Reject Ho ( 1% level)
DEU                                          Cannot reject Ho
ESP                                     Reject Ho ( 1% level)
FRA                                     Reject Ho ( 1% level)
GBR                                     Reject Ho ( 1% level)
IRL                                     Reject Ho ( 1% level)
ITA                                     Reject Ho ( 1% level)
LUX                                     Reject Ho ( 1% level)
NOR                                     Reject Ho ( 1% level)
==============================================================</pre>
<p>Finally, the <code>pd_kpss</code> procedure prints the estimated breakpoints for each individual in the panel.</p>
<pre>Group        Break 1      Break 2      Break 3      Break 4      Break 5<br />
AUT          2003         2008         .            .            .<br />
DEU          .            .            .            .            .<br />
ESP          1999         2006         2009         2012         .<br />
FRA          2001         2008         .            .            .<br />
GBR          2000         2008         .            .            .<br />
IRL          2007         2010         .            .            .<br />
ITA          1997         2006         2009         .            .<br />
LUX          1999         2004         2008         .            .<br />
NOR          2008         .            .            .            .            </pre>
<div class="alert alert-info" role="alert">For more information on how to view the matrices returned by <code>pd_kpss</code> see our <a href="https://www.aptech.com/resources/tutorials/introduction-to-gauss-viewing-data-in-gauss/" target="_blank" rel="noopener">data viewing tutorial</a>.</div>
<h2 id="interpreting-the-results">Interpreting the Results</h2>
<p>When interpreting the results from the <code>pd_kpss</code> test, it helps to remember a few key things:</p>
<ul>
<li>The test considers the null hypothesis of <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/#what-is-a-stationary-time-series" target="_blank" rel="noopener">stationarity</a> against the alternative of non-stationarity.</li>
<li>We reject the null hypothesis of stationarity when we observe:
<ul>
<li>Large values of the test statistic. </li>
<li>Small p-values. </li>
</ul></li>
</ul>
<p>Notice that the TSPDLIB library conveniently provides interpretations for the <code>pd_kpss</code> tests. </p>
<h3 id="panel-data-test-statistic">Panel Data Test Statistic</h3>
<p>The test statistic for our panel, assuming homogeneous variances:</p>
<ul>
<li>Is equal to 14.352 with a p-value of 0.000.</li>
<li>Suggests that we reject the null hypothesis of stationarity at the 1% level. </li>
</ul>
<p>The test statistic for our panel, assuming heterogeneous variances:</p>
<ul>
<li>Is equal to 10.425 with a p-value of 0.000. </li>
<li>Suggests that we reject the null hypothesis of stationarity at the 1% level.</li>
</ul>
<p>These results tell us that regardless of whether we assume heterogeneous or homogeneous variances, we can reject the null hypothesis of stationarity for the panel. Given this, we must make proper adjustments to account for non-stationarity when modeling our data. </p>
<h3 id="individual-test-results">Individual Test Results</h3>
<p><a href="https://www.aptech.com/wp-content/uploads/2020/09/pankpss-graph-spanish-1.jpeg"><img src="https://www.aptech.com/wp-content/uploads/2020/09/pankpss-graph-spanish-1.jpeg" alt="Panel data stationarity test with structural breaks. " width="75%" height="75%" class="aligncenter size-full wp-image-11580083" /></a></p>
<table>
<thead>
<tr><th>Country</th><th>Statistic</th><th>Breaks</th><th>Conclusion</th></tr>
</thead>
<tbody>
<tr><td>Austria</td><td>0.165</td><td>2003;2008</td><td>Reject null at 1%.</td></tr>
<tr><td>France</td><td>0.210</td><td>2001;2008</td><td>Reject null at 1%.</td></tr>
<tr><td>Germany</td><td>0.079</td><td>None</td><td>Cannot reject null.</td></tr>
<tr><td>Ireland</td><td>0.235</td><td>2007;2010</td><td>Reject null at 1%.</td></tr>
<tr><td>Italy</td><td>0.130</td><td>1997;2006;2009</td><td>Reject null at 1%.</td></tr>
<tr><td>Luxembourg</td><td>0.127</td><td>1999;2004;2008</td><td>Reject null at 1%.</td></tr>
<tr><td>Norway</td><td>0.414</td><td>2008</td><td>Reject null at 1%.</td></tr>
<tr><td>Spain</td><td>0.249</td><td>1999;2006;2009;2012</td><td>Reject null at 1%.</td></tr>
<tr><td>United Kingdom</td><td>0.298</td><td>2000;2008</td><td>Reject null at 1%.</td></tr>
</tbody>
</table>
<h2 id="conclusion">Conclusion</h2>
<p>Today's blog considers the panel data stationarity test proposed by Carrion-i-Silvestre et al. (2005). This test is built upon two crucial aspects of unit root testing:</p>
<ul>
<li>Panel data specific tests should be used with panel data.</li>
<li>Structural breaks should be accounted for.</li>
</ul>
<p>Ignoring either of these points can lead to unreliable conclusions. </p>
<p>After today, you should have a stronger understanding of how to implement the panel data stationarity test with structural breaks in GAUSS and how to interpret the results. </p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/panel-data-structural-breaks-and-unit-root-testing/" target="_blank" rel="noopener">Panel data, structural breaks and unit root testing</a></li>
<li><a href="https://www.aptech.com/blog/panel-data-basics-one-way-individual-effects/" target="_blank" rel="noopener">Panel Data Basics: One-way Individual Effects</a></li>
<li><a href="https://www.aptech.com/blog/how-to-aggregate-panel-data-in-gauss/" target="_blank" rel="noopener">How to Aggregate Panel Data in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">Introduction to the Fundamentals of Panel Data</a></li>
<li><a href="https://www.aptech.com/blog/transforming-panel-data-to-long-form-in-gauss/" target="_blank" rel="noopener">Transforming Panel Data to Long Form in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/get-started-with-panel-data-in-gauss-video/" target="_blank" rel="noopener">Getting Started With Panel Data in GAUSS </a></li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Running publicly available GAUSS code: Part 2</title>
		<link>https://www.aptech.com/blog/running-publicly-available-gauss-code-part-2/</link>
					<comments>https://www.aptech.com/blog/running-publicly-available-gauss-code-part-2/#respond</comments>
		
		<dc:creator><![CDATA[aptech]]></dc:creator>
		<pubDate>Fri, 08 Feb 2019 18:05:00 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[cointegration]]></category>
		<category><![CDATA[structural breaks]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=19462</guid>

					<description><![CDATA[This week's blog brings you the second video in the series examining running publicly available GAUSS code. This video runs the popular code by Hatemi-J for testing cointegration with multiple structural breaks. In this video you will learn how to:

<ul>
<li> Substitute your own dataset.</li>
<li> Modify the indexing commands for your data.</li>
<li>Remove missing values.</li>
<li> Preview your data after loading with the <b>Ctrl+E</b> keyboard shortcut.</li>
</ul>
]]></description>
										<content:encoded><![CDATA[<h3 id="hatemi-code-for-cointegration-with-multiple-structural-breaks">Hatemi code for cointegration with multiple structural breaks</h3>
<p> </p>
<div style="border-style: solid; border-color: #36434c; border-width: 5px;">
<iframe width="800" height="454" src="https://www.youtube.com/embed/covQ86yDbAg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
<br/>
<p>This week's blog brings you the second video in the series examining running publicly available GAUSS code. This video runs the popular code by <a href="https://ideas.repec.org/c/boc/bocode/g00006.html">Hatemi-J</a> for testing cointegration with multiple <a href="https://www.aptech.com/structural-breaks/">structural breaks</a>. In this video you will learn how to:</p>
<ul>
<li>Substitute your own dataset.</li>
<li>Modify the indexing commands for your data.</li>
<li>Remove missing values.</li>
<li>Preview your data after loading with the <em>Ctrl+E</em> keyboard shortcut.</li>
</ul>
<p><strong>Previous</strong>: <a href="https://www.aptech.com/blog/running-publicly-available-gauss-code-part-1/">Running Public GAUSS Code: Part 1</a></p>
<h2 id="script">Script</h2>
<h3 id="introduction">Introduction</h3>
<p>Hello and welcome to the second part of our series on running publicly available GAUSS code. Last time we showed how to set up a project folder, run a program and resolve the <code>library not found</code> and <code>file not found</code> errors.</p>
<p>Today we will run the code which implements the tests for cointegration with two unknown structural breaks from Dr. Hatemi’s 2008 paper in <em>Empirical Economics</em>,  using a couple of variables from the famous Nelson-Plosser dataset.</p>
<h3 id="starting-point">Starting point</h3>
<p>We have saved the program and data in a folder named Hatemi under the GAUSS-Projects folder which we created in the first video in this series. If you are not sure how to do that, go back and take a look at the first video. </p>
<h3 id="code-layout">Code layout</h3>
<p>Let’s start by looking at how this file is laid out. It is split into two main sections. The first part of the file, up to line 22, is responsible for loading data and calling the main procedure, which in this case is literally named <code>main</code>. This is the section that we will need to modify.</p>
<p>The second section of the code contains all of the procedures that Dr. Hatemi wrote to compute the algorithm from his paper.</p>
<p>The <code>end</code> command on line 22 is the line of demarcation between these two sections.</p>
<p>GAUSS will not run any commands located after <code>end</code>, but it will still find the procedures so they will be available for use by the code in the first section. </p>
<h3 id="data-loading-code">Data loading code</h3>
<p>Let’s look at this first section to see what it is doing. The action starts on line 9 with the <code>load</code> command.</p>
<p>This statement is going to:</p>
<ol>
<li>Look for a file named <code>b22.txt</code> in your current working directory.</li>
<li>Load the data it finds in that file.</li>
<li>Reshape the data to be a matrix with <code>obs</code> rows and <code>var</code> columns.</li>
</ol>
<p>We’re going to have to change this line because our data is not in <code>b22.txt</code> and we don’t want to have to specify the data size ahead of time, as we see here with <code>obs</code> and <code>var</code>.</p>
<h3 id="our-dataset">Our dataset</h3>
<p>Our data is in a Stata dataset named <code>nelsonplosser.dta</code>. The <code>loadd</code> command allows us to load data from many different types of datasets, including Stata datasets and to specify the model variables by name. So we will use <code>loadd</code>.</p>
<h3 id="dataset-preview">Dataset preview</h3>
<p>Before we load our data, let’s get a preview of our dataset by double-clicking it in the Project Folder’s Window. Let’s scroll to the right to see all of the variables.</p>
<p>The yellow cells with dots contain missing values. We will deal with those after loading the data.</p>
<p>For this video our dependent variable will be bond yield which is the <code>bnd</code> variable and our independent variable will be the money supply, which is just <code>m</code>.</p>
<h3 id="load-preview-and-index-our-data">Load, preview and index our data</h3>
<p>Let’s use the <code>loadd</code> command to load our two variables into <code>z</code>.</p>
<p>Now we see that lines 10 and 11 are splitting up the variables that we loaded from our dataset.</p>
<p>The intention of line 10 is to select all observations from the first column of <code>z</code>. However, we have a problem because <code>obs</code> has not yet been given a value. We could add a line setting <code>obs</code> equal to the number of rows of <code>z</code>. That would be correct. </p>
<p>However, we can replace <code>1:obs</code> with the dot operator. This is shorter and makes it clear that our intention is to load all the rows.</p>
<p>We see the same problem in line 11 with the row range and also with the column range. For now, we can just change the column index to two, because <code>z</code> will only have two columns.</p>
<p>The remaining lines create two variables, <code>obs</code> and <code>n</code>, which do not appear to be used, and then there is the call to <code>main</code>. We see that the only variables passed into <code>main</code> are <code>y</code> and <code>x</code>, which we have created.</p>
<p>Let’s add an <code>end</code> command just before the call to <code>main</code> and then run our code so we can verify that our data is loaded correctly.</p>
<p>We’ll open <code>z</code>, <code>y</code> and <code>x</code> in floating symbol editors by clicking on them and using the <code>Ctrl+E</code> hotkey.</p>
<h3 id="remove-missing-values">Remove missing values</h3>
<p>As we mentioned earlier, the dots represent missing values. The code won’t work if the data contains missing values, so we will have to remove them.</p>
<p>The <code>packr</code> command removes all rows in which any element contains a missing value. If we add <code>packr</code> right after we load the data, the first 40 rows should be trimmed off, leaving us with <code>z</code> as a <code>71x2</code> matrix.</p>
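The effect of <code>packr</code> can be illustrated outside GAUSS as well. Below is a minimal Python sketch of the same row-wise listwise deletion (the function name <code>packr_like</code> is ours, not part of any library):

```python
import math

def packr_like(rows):
    """Drop every row that contains at least one missing value (NaN),
    mimicking the behavior of GAUSS's packr command."""
    return [r for r in rows if not any(math.isnan(v) for v in r)]

data = [[float("nan"), 1.0],
        [2.5, 3.1],
        [4.0, float("nan")],
        [5.0, 6.0]]

print(packr_like(data))   # prints [[2.5, 3.1], [5.0, 6.0]]
```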
<h3 id="run-the-code-and-view-output">Run the code and view output</h3>
<p>Now that we have confirmed that our data is correct, let’s remove the <code>end</code> command that we added and run the full code.</p>
<p>We have successfully run the code. Here is our output.</p>
<h3 id="conclusion">Conclusion</h3>
<p>Thank you for watching! I hope this has helped you to become more familiar with GAUSS. Let us know what content you would like to see. Please post your comments and questions below!</p>
<h2 id="references">References</h2>
<p>Abdulnasser Hatemi-J, (2008),
<A HREF="https://ideas.repec.org/a/spr/empeco/v35y2008i3p497-505.html">Tests for cointegration with two unknown regime shifts with an application to financial market integration</A>,
<i>Empirical Economics</i>, Springer, vol. 35(3), p. 497-505.</p>
<p>Nelson, Charles and Plosser, Charles, (1982), Trends and random walks in macroeconomic time series: Some evidence and implications, <i>Journal of Monetary Economics</i>, vol. 10(2), p. 139-162.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/running-publicly-available-gauss-code-part-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A Simple Test for Structural Breaks in Variance</title>
		<link>https://www.aptech.com/blog/a-simple-test-for-structural-breaks-in-variance/</link>
					<comments>https://www.aptech.com/blog/a-simple-test-for-structural-breaks-in-variance/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 30 Nov 2018 21:02:11 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[ICSS]]></category>
		<category><![CDATA[structural breaks]]></category>
		<category><![CDATA[time series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=17073</guid>

					<description><![CDATA[Though many standard econometric models assume that variance is constant, structural breaks in variance are well-documented, particularly in economic and finance data. If these changes are not accurately accounted for, they can hinder forecast inference measures, such as forecast variances and intervals. In this blog, we consider a tool that can be used to help locate structural breaks in variance -- the iterative cumulative sum of squares algorithm(ICSS) (Inclan and Tiao, 1994).]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>Though many standard econometric models assume that variance is constant, <a href="https://www.aptech.com/structural-breaks/">structural breaks</a> in variance are well-documented, particularly in economic and finance data. If these changes are not accurately accounted for, they can hinder forecast inference measures, such as forecast variances and intervals.</p>
<p>In this blog, we consider a tool that can be used to help locate <a href="https://www.aptech.com/structural-breaks/">structural breaks</a> in variance -- the <a href="https://www.jstor.org/stable/2290916?seq=1#page_scan_tab_contents">iterative cumulative sum of squares algorithm</a> (ICSS) (Inclan and Tiao, 1994).</p>
<h2 id="centered-cumulative-sum-of-squares">Centered Cumulative Sum of Squares</h2>
<p>The first step of the algorithm is to find the cumulative sum of squares test statistic. This statistic can be found in four easy steps:
    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // caculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>
<ol>
<li>Using the model error terms, find the cumulative sum of squares (CSS) at every potential breakpoint, <em>k</em> = 1 through <em>T</em>:
$$C_1 = \sum_{t=1}^1 a_t^2 \\ C_2 = \sum_{t=1}^2 a_t^2 \\ \vdots \\ C_k = \sum_{t=1}^k a_t^2 \\ C_T = \sum_{t=1}^T a_t^2$$
 
$$ a = \begin{bmatrix} 0.1 \\ 0.3 \\ 0.5 \\ 0.7 \\ 0.2 \end{bmatrix} \rightarrow C_{k=1,2,..T} = \begin{bmatrix} 0.01 \\ 0.1 \\ 0.35 \\ 0.84 \\ 0.88 \end{bmatrix} $$
 </li>
<li>Normalize and center the cumulative sum of squares, using the partial series CSS, $C_k$, and the full series CSS, $C_T$:
$$D_k = \frac{C_k}{C_T} - \frac{k}{T}\\$$
 
$$D_{k=1,2,..T} = \begin{bmatrix} 0.01/0.88 \\ 0.1/0.88 \\ 0.35/0.88 \\ 0.84/0.88 \\ 0.88/0.88 \end{bmatrix} - \begin{bmatrix} 1/5 \\ 2/5 \\ 3/5 \\ 4/5 \\ 5/5 \end{bmatrix} = \begin{bmatrix} -0.189\\ -0.286 \\ -0.202\\ 0.155 \\ 0.000 \end{bmatrix} $$
 </li>
<li>Find the maximum centered cumulative sum of squares. The potential breakpoint, $k^{*}$, is the location in the series of the maximum absolute value of the centered cumulative sum of squares, $D_{k^{*}}$ :
$$D_{k^{*}} = \max\limits_{k} | D_k |\\$$
 
$$ \begin{bmatrix} abs(-0.189)\\ abs(-0.286) \\ abs(-0.202)\\ abs(0.155)\\ abs(0.000) \end{bmatrix} \rightarrow D_{k^{*}} = 0.286, k^{*} = 2 $$
 </li>
<li>Finally, if $IT = \sqrt{T/2}D_{k^{*}}$ exceeds the critical value of the limiting distribution, then $k^*$ represents a statistically significant breakpoint.</li>
</ol>
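The four steps above can be sketched compactly in code. Here is a minimal Python implementation of the centered cumulative sum of squares and the $IT$ statistic, reproducing the worked example (a language-agnostic sketch; the function name is ours):

```python
import math

def it_statistic(a):
    """Centered CSS statistic of Inclan and Tiao (1994).
    Returns (IT, k_star), where k_star is the 1-indexed candidate break."""
    T = len(a)
    # Step 1: cumulative sum of squares C_k
    C, s = [], 0.0
    for v in a:
        s += v * v
        C.append(s)
    # Step 2: normalize and center: D_k = C_k / C_T - k / T
    D = [C[k] / C[-1] - (k + 1) / T for k in range(T)]
    # Step 3: candidate break at the maximum |D_k|
    k_star = max(range(T), key=lambda k: abs(D[k]))
    # Step 4: scale by sqrt(T/2) to get the IT statistic
    return math.sqrt(T / 2) * abs(D[k_star]), k_star + 1

a = [0.1, 0.3, 0.5, 0.7, 0.2]
it, k = it_statistic(a)
print(round(it, 3), k)   # prints 0.453 2  (D_k* = 0.286 at k = 2)
```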
<h2 id="iterative-cumulative-sum-of-squares-algorithm">Iterative Cumulative Sum of Squares Algorithm</h2>
<p>Finding $k^*$ would be sufficient if we were certain that there was only one break point in the variance. However, how can we definitively know that this is the case?</p>
<p>Because we cannot eliminate the possibility that there are multiple breaks in the variance, we must iteratively search for all potential additional breakpoints.</p>
<p>The ICSS algorithm searches for breakpoints in each of the sections created by newly found breakpoints. Once new breakpoints are no longer found, the search stops.</p>
<p>Considering our hypothetical example, our first potential break was located at $k^{*} = 2$, therefore, our iterative search for breakpoints would begin again in the section of data bounded by $t = 3$ and $t = 5$.</p>
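The iterative search can be sketched as a simple recursion. Note this is a simplified illustration of the idea, not the full refinement procedure from the paper, and it assumes the commonly cited 5% asymptotic critical value of 1.358 for the $IT$ statistic; the helper names are ours:

```python
import math

def it_statistic(a):
    """Centered CSS statistic; returns (IT, 1-indexed candidate break)."""
    T = len(a)
    C, s = [], 0.0
    for v in a:
        s += v * v
        C.append(s)
    D = [C[k] / C[-1] - (k + 1) / T for k in range(T)]
    k_star = max(range(T), key=lambda k: abs(D[k]))
    return math.sqrt(T / 2) * abs(D[k_star]), k_star + 1

def icss_simple(a, offset=0, crit=1.358):
    """Simplified ICSS: recursively split at significant breaks.
    Returns break locations as 1-indexed positions in the full series."""
    if len(a) < 3:
        return []
    it, k = it_statistic(a)
    if it <= crit:
        return []
    # Significant break after observation k; search each new segment
    left = icss_simple(a[:k], offset, crit)
    right = icss_simple(a[k:], offset + k, crit)
    return left + [offset + k] + right

# One small-variance regime followed by one large-variance regime
series = [0.1, -0.1] * 50 + [1.0, -1.0] * 50
print(icss_simple(series))   # prints [100]
```

With a single variance shift, the recursion finds the break and then stops, because neither sub-segment produces a significant statistic.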
<h2 id="using-the-icss-algorithm-in-gauss">Using the ICSS algorithm in GAUSS</h2>
<p>The ICSS test is available in the GAUSS <a href="https://github.com/aptech/gauss-carrion-library" target="_blank" rel="noopener">carrionlib</a> package. This package is a free package that should be installed using the <a href="https://www.aptech.com/blog/gauss-package-manager-basics/" target="_blank" rel="noopener">GAUSS Package Manager</a>.</p>
<div class="alert alert-info" role="alert">For more information about using GAUSS libraries, please see our blog, <a href="https://www.aptech.com/blog/using-gauss-packages-complete-guide/" target="_blank" rel="noopener">&quot;Using GAUSS Packages [Complete Guide]&quot;</a>.</div>
<h3 id="the-icss-procedure">The icss Procedure</h3>
<p>The carrionlib package includes the <code>icss</code> procedure for implementing the ICSS test. Because the <code>icss</code> function also provides options to perform the modifications discussed in the paper, it requires the following three inputs:</p>
<hr />
<dl>
<dt>e</dt>
<dd>Vector, the stochastic series to be tested.</dd>
<dt>test</dt>
<dd>Scalar, an indicator of which test to run; set to zero to run the standard ICSS test.</dd>
<dt>cri</dt>
<dd>Vector, 3x1, sets the bandwidth for the modification models. This is irrelevant to the standard ICSS and can be set to any 3x1 vector.</dd>
</dl>
<hr />
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">{ cp, nbre } = icss(e, test, cri);</code></pre>
<h2 id="empirical-example">Empirical Example</h2>
<p>Let's use a quick empirical example to get a better understanding of how to use the <code>icss</code> procedure. </p>
<p><a href="https://www.aptech.com/wp-content/uploads/2018/11/gblog-sp500-icss-test.png"><img class="aligncenter size-full wp-image-18561" src="https://www.aptech.com/wp-content/uploads/2018/11/gblog-sp500-icss-test.png" alt="ICSS structural break test." width="800" height="400" /></a></p>
<p>Today, we'll use the S&amp;P 500 data provided by <a href="http://www.aefin.es/articulos/pdf/A4-2_443809.pdf">Sanso, Arago, and Carrion-i-Silvestre (2004)</a> to demonstrate the <code>icss</code> procedure.</p>
<p>The first step is to load the library and load our data using the <a href="https://docs.aptech.com/gauss/loadd.html" target="_blank" rel="noopener"><code>loadd</code></a> procedure: </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load library
new;
library carrionlib;

// Load S&amp;P data
x = loadd("sp.dat");</code></pre>
<p>Next, to prepare our data for testing, we'll demean our data:</p>
<pre class="hljs-container hljs-container-solo"><code>// Demean data
e = x - meanc(x);</code></pre>
<p>Finally, we are ready to run our test:</p>
<pre class="hljs-container hljs-container-solo"><code>// Set our test to run ICSS
test = 0;

// Set cri vector to be any
// 3x1 vector
cri = 0|0|0;

// Run
{ cp, nbre } = icss(e, test, cri);</code></pre>
<p>The function returns two outputs:</p>
<ol>
<li>A vector containing the change points (<em>cp</em>).</li>
<li>A scalar containing the number of break points (<em>nbre</em>).</li>
</ol>
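<p>As a minimal sketch of what to do with these returns (assuming the <code>icss</code> call above has already run), we can print both outputs:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Report the number of variance breaks found
print "Number of breaks found:" nbre;

// Report the locations of the estimated change points
print "Change points (observation index):";
print cp;</code></pre>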
<h2 id="conclusion">Conclusion</h2>
<p>Identifying structural breaks in the variance of data is an important step in modeling time series data. In this tutorial we've covered:</p>
<ul>
<li>What the ICSS algorithm is.</li>
<li>How to use the ICSS algorithm in GAUSS.</li>
</ul>
<p>Code and data from this blog can be found <a href="https://github.com/aptech/gauss_blog/tree/master/time_series/icss-11.30.2018">here</a>.</p>
<h2 id="references">References</h2>
<p>Inclan, C., &amp; Tiao, G. C. (1994). Use of cumulative sums of squares for retrospective detection of changes of variance. <em>Journal of the American Statistical Association, 89</em>(427), 913-923.</p>
<p>Sansó, A., Aragó, V., &amp; Carrion-i-Silvestre, J. L. (2004). Testing for changes in the unconditional variance of financial time series. <em>Revista de Economía Financiera, 4</em>(1), 32-53.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/a-simple-test-for-structural-breaks-in-variance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Why the GLS-unit root test with multiple structural breaks?</title>
		<link>https://www.aptech.com/blog/the-changing-trend-in-home-values/</link>
					<comments>https://www.aptech.com/blog/the-changing-trend-in-home-values/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Wed, 25 Jul 2018 21:03:18 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[housing]]></category>
		<category><![CDATA[structural breaks]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=17088</guid>

					<description><![CDATA[Many estimations and forecasting methods are not valid if the mean and variance are not constant across time. Today we examine how to test for both using GLS-unit root tests with multiple structural breaks.]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>The housing market has been on a roller coaster ride since the early 2000s. We have seen the rise and fall of one bubble, and some fear that we may be witnessing a second bubble today. </p>
<p>If we want to study housing prices, an important first step is to check whether the data are trend stationary. This is because many estimation and forecasting methods are not valid if the mean and variance are not constant across time.</p>
<p>Today we will answer that question for two housing markets, Washington and Arizona, using the <a href="https://www.jstor.org/stable/40388611?seq=1#page_scan_tab_contents">Carrion-i-Silvestre, Kim, and Perron (2009)</a> GLS-unit root tests with multiple structural breaks.  </p>
<h2 id="the-data">The data</h2>
<p>We will use two different measures of housing prices: </p>
<ol>
<li>The historic <a href="https://www.zillow.com/research/data/">Zillow Home Value Index</a> in Phoenix and Seattle (Zillow Market Summary can be found in Table 1). </li>
<li>The <a href="https://www.fhfa.gov/DataTools/Downloads/Pages/House-Price-Index-Datasets.aspx">Federal Housing Finance Agency's House Price Index Dataset</a> in Arizona and Washington. </li>
</ol>
<p>Table 1: Zillow Housing Market Summary</p>
<table>
<thead>
<tr>
<th></th>
<th>Seattle, Washington</th>
<th>Phoenix, Arizona</th>
</tr>
</thead>
<tbody>
<tr>
<td>Zillow Home Value Index</td>
<td>$764,700</td>
<td>$230,100</td>
</tr>
<tr>
<td>Zillow Market Temperature</td>
<td>Hot</td>
<td>Cool</td>
</tr>
<tr>
<td>Median Listing Price</td>
<td>$729,950</td>
<td>$285,000</td>
</tr>
<tr>
<td>Median Sale Price</td>
<td>$719,100</td>
<td>$232,900</td>
</tr>
<tr>
<td>Forecasted 1-yr Growth</td>
<td>6.8%</td>
<td>3.6%</td>
</tr>
</tbody>
</table>
<p>As posted on 7/16/2018.</p>
<h2 id="the-gls-unit-root-test-with-multiple-structural-breaks">The GLS-Unit Root Test with Multiple Structural Breaks</h2>
<p>The <a href="https://www.jstor.org/stable/40388611?seq=1#page_scan_tab_contents">Carrion-i-Silvestre, Kim, and Perron (2009)</a> GLS-unit root test with multiple structural breaks (GLS-MSBUR) makes a number of important contributions to the unit root testing literature: </p>
<ol>
<li>It allows for multiple breaks under both the null and alternative hypotheses, which results in a unit root test that is not sensitive to the size of the structural breaks (Carrion-i-Silvestre, Kim, and Perron, 2009). </li>
<li>Unlike other tests, the GLS-MSBUR test allows for <em>multiple</em> <a href="https://www.aptech.com/structural-breaks/">structural breaks</a>. This is particularly applicable in the case of housing price markets which have had multiple potential structural breaks. </li>
<li>The GLS-MSBUR test implements GLS detrending, which has been shown to improve the power of unit root tests.</li>
</ol>
<h2 id="housing-market-dynamics">Housing Market Dynamics</h2>
<h3 id="unit-root-testing">Unit Root Testing</h3>
<p>We ran a standard ADF unit root test and the GLS-MSBUR tests on our housing data. The results are in Table 2, below.</p>
<p><strong>Table 2: Unit root test results</strong></p>
<table>
<thead>
<tr>
<th></th>
<th>Washington</th>
<th>Seattle</th>
<th>Arizona</th>
<th>Phoenix</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADF without structural breaks</td>
<td>0.6646<br>(-3.155)</td>
<td>4.991<br>(-3.135)</td>
<td>-1.315<br>(-3.135)</td>
<td>-0.201<br>(-3.135)</td>
</tr>
<tr>
<td>ADF with structural breaks</td>
<td>-1.044<br>(-4.139)</td>
<td>1.340<br>(-4.076)</td>
<td>-3.12<br>(-4.083)</td>
<td>-0.513<br>(-4.120)</td>
</tr>
</tbody>
</table>
<p>Note: 5% critical values in parentheses</p>
<p>For all cases without structural breaks, the test statistic is greater than the 5% critical value. This means we are unable to reject the null hypothesis of a unit root in any of our data. </p>
<p>After accounting for structural breaks in both the level and slope of the time trend, our ADF tests still show that we are unable to reject the null hypothesis of a unit root in all cases. Knowing this paves the way for us to more accurately model housing price dynamics. </p>
<h3 id="structural-break-identification">Structural Break Identification</h3>
<p>Based on our unit root test results, our model of housing prices should incorporate structural breaks. The GLS-MSBUR test conveniently estimates the timing of the breakpoints in our housing series. </p>
<p>We consider the case of three break points for each of the series. The estimated breaks are imposed on the original data series in the graph below and summarized in Table 3. </p>
<p><a href="https://www.aptech.com/wp-content/uploads/2018/07/sb_housing_azwa.png"><img src="https://www.aptech.com/wp-content/uploads/2018/07/sb_housing_azwa.png" alt="Structural breaks in housing price data." width="640" height="320" class="aligncenter size-full wp-image-17390" /></a>
<a href="https://www.aptech.com/wp-content/uploads/2018/07/sb_housing_phxsea.png"><img src="https://www.aptech.com/wp-content/uploads/2018/07/sb_housing_phxsea.png" alt="Structural breaks in housing data." width="640" height="320" class="aligncenter size-full wp-image-17391" /></a></p>
<p><strong>Table 3: Structural break estimates</strong></p>
<table>
<thead>
<tr>
<th></th>
<th>Washington</th>
<th>Seattle</th>
<th>Arizona</th>
<th>Phoenix</th>
</tr>
</thead>
<tbody>
<tr>
<td>Break One</td>
<td>2000-Q2</td>
<td>February, 2004</td>
<td>2000-Q4</td>
<td>July, 1999</td>
</tr>
<tr>
<td>Break Two</td>
<td>2007-Q2</td>
<td>May, 2006</td>
<td>2006-Q3</td>
<td>March, 2006</td>
</tr>
<tr>
<td>Break Three</td>
<td>2011-Q1</td>
<td>September, 2008</td>
<td>2009-Q3</td>
<td>September, 2009</td>
</tr>
</tbody>
</table>
<h2 id="conclusions">Conclusions</h2>
<p>Properly distinguishing structural breaks from unit roots is crucial to valid estimation and forecasting. Home prices provide an excellent case study of the importance of identifying <a href="https://www.aptech.com/structural-breaks/">structural breaks</a> when testing for unit roots. </p>
<p>We have shown that the standard ADF unit root test fails to reject the unit root for home price measures across a number of markets. Even after allowing for structural breaks in the trend, we remain unable to reject the null hypothesis of a unit root. </p>
<p>Code and data from this blog can be found <a href="https://github.com/aptech/gauss_blog/tree/master/time_series/gls-msbur-7.25.18">here</a>.</p>
<h2 id="references">References</h2>
<p>Carrion-i-Silvestre, J., Kim, D., &amp; Perron, P. (2009). GLS-based unit root tests with multiple structural breaks under both the null and the alternative hypotheses. <em>Econometric Theory, 25</em>(6), 1754-1792. Retrieved from <a href="http://www.jstor.org/stable/40388611">http://www.jstor.org/stable/40388611</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/the-changing-trend-in-home-values/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>The Effects of Structural Breaks on GMM models</title>
		<link>https://www.aptech.com/blog/the-effects-of-structural-breaks-on-gmm-models/</link>
					<comments>https://www.aptech.com/blog/the-effects-of-structural-breaks-on-gmm-models/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Wed, 25 Jul 2018 04:26:30 +0000</pubDate>
				<category><![CDATA[Panel data]]></category>
		<category><![CDATA[Time Series]]></category>
		<category><![CDATA[structural breaks]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=17106</guid>

					<description><![CDATA[While structural breaks are a widely examined topic in pure time series, their impacts on panel data models have garnered less attention. 

However, in their forthcoming paper <a href="https://www.researchgate.net/publication/254406113_The_Difference_System_and_&#039;Double-D&#039;_GMM_Panel_Estimators_in_the_Presence_of_Structural_Breaks">Chowdhury and Russell (2018)</a> demonstrate that <a href="https://www.aptech.com/structural-breaks/">structural breaks</a> can cause bias in the instrumental variable panel estimation framework. 

This work highlights that concern over structural breaks shouldn't be limited to pure time series models; they warrant equal attention in panel data models.

]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>While structural breaks are a widely examined topic in pure <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">time series</a>, their impacts on <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">panel data models</a> have garnered less attention. </p>
<p>However, in their forthcoming paper <a href="https://www.researchgate.net/publication/254406113_The_Difference_System_and_'Double-D'_GMM_Panel_Estimators_in_the_Presence_of_Structural_Breaks" target="_blank" rel="noopener">Chowdhury and Russell</a> demonstrate that <a href="https://www.aptech.com/structural-breaks/" target="_blank" rel="noopener">structural breaks</a> can cause bias in the instrumental variable panel estimation framework. </p>
<p>This work highlights that concern over structural breaks shouldn't be limited to pure time series models; they warrant equal attention in panel data models.</p>
<h2 id="the-model">The Model</h2>
<p>For simplicity, consider the same AR(1) dynamic panel data model used by Chowdhury and Russell:</p>
<p>$$y_{it} = \alpha y_{it-1} + \eta_i + \nu_{it}$$</p>
<p>In this model, $\eta_i$ represents the individual fixed effects and $\nu_{it}$ represents the random error terms. </p>
<p>Now consider an additive break in the <a href="https://www.aptech.com/blog/panel-data-basics-one-way-individual-effects/#the-fixed-effects-model-1" target="_blank" rel="noopener">fixed effects</a> at time $T_B$ such that  </p>
<p>$$y_{it} = \alpha y_{it-1} + (\eta_i + \delta_{iT_B}) + \nu_{it}, t \ge T_B$$</p>
<p>where</p>
<p>$$ E[\delta_{iT_B} \nu_{it}] \ne 0, \quad t \lt T_B\\ E[\delta_{iT_B} \nu_{it}] = 0, \quad t \ge T_B\\ E[\delta_{iT_B} \eta_i] \ne 0 $$ </p>
<p>Note that just as the fixed effects, $\eta_i$, are different across each individual the impact of the structural break, $\delta_{iT_B}$, on the fixed effects is different across the individuals. </p>
<h3 id="model-summary">Model Summary</h3>
<ul>
<li>Dynamic panel data model.</li>
<li>Individual specific structural break in the fixed effects ($\delta_{iT_B}$).</li>
<li>$E[\delta_{iT_B} \eta_i] \ne 0$.</li>
</ul>
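<p>To make the setup above concrete, here is a minimal GAUSS sketch simulating this DGP. All parameter values are illustrative assumptions, not taken from the paper; the key feature is that the break shift $\delta_{iT_B}$ is built from $\eta_i$, so $E[\delta_{iT_B} \eta_i] \ne 0$.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">new;
rndseed 8675;

// Illustrative dimensions and parameters
n_i = 200;     // individuals
n_t = 10;      // time periods
t_b = 5;       // break date
alpha = 0.5;   // AR(1) coefficient

// Fixed effects and break shifts; delta is constructed
// from eta so the two are correlated
eta = rndn(n_i, 1);
delta = 0.8*eta + rndn(n_i, 1);

// Generate y, adding delta to the fixed effect from t_b onward
y = zeros(n_i, n_t);
y[., 1] = eta + rndn(n_i, 1);
for t (2, n_t, 1);
    fe = eta + (t >= t_b) * delta;
    y[., t] = alpha*y[., t-1] + fe + rndn(n_i, 1);
endfor;</code></pre>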
<h2 id="the-arellano-bond-estimation-method">The Arellano-Bond Estimation Method</h2>
<p>In static panel data models, like the one-way fixed effects model, demeaning or differencing is used to address heterogeneity. However, <a href="http://www.econ.uiuc.edu/~econ536/Papers/nickell81.pdf" target="_blank" rel="noopener">Nickell (1981)</a> showed that in dynamic panel data models this process creates a bias in the coefficient estimates.</p>
<p>To address this issue, lagged levels or differences of the dependent variable are used as instruments. The dynamic panel data approach, popularized by <a href="http://people.stern.nyu.edu/wgreene/Econometrics/Arellano-Bond.pdf" target="_blank" rel="noopener">Arellano and Bond (1991)</a>, uses a system of equations, one for each time period, and different instruments in each equation. </p>
<p>This is done to allow the use of newly available lagged variables as instruments as we move forward through the time series. From these new instruments, the Arellano-Bond moment conditions are formed.  </p>
<h3 id="arellano-bond-moment-conditions">Arellano-Bond Moment Conditions</h3>
<ul>
<li>Difference Estimator
$$E[y_{it-s}(\Delta \nu_{it})] = 0\ \text{for}\ t = 3, 4, \ldots, T\ \text{and}\ 2 \le s \le t-1$$</li>
<li>Level Estimator
$$E [\Delta y_{it-s}(\nu_{it} + \eta_i)] = 0\ \text{for}\ t = 3, 4, \ldots, T\ \text{and}\ 2 \le s \le t-1$$</li>
</ul>
<h2 id="the-bias">The Bias</h2>
<h3 id="the-arellano-bond-difference-estimator">The Arellano-Bond Difference Estimator</h3>
<p>When structural breaks are present, the Arellano-Bond moment conditions are no longer valid. To demonstrate, consider the difference equation moments when there is a structural break at $T_B = 3$, $t = 4$, and $s = 2$:</p>
<p>$$E[y_{it-s}(\Delta \nu_{it})]\\ = E[y_{i2}\big((y_{i4} - \alpha y_{i3} - \eta_i - \delta_{iT_B}) - (y_{i3} - \alpha y_{i2} - \eta_i)\big)] \\ = E[y_{i2}\Delta y_{i4}] - \alpha E[y_{i2} \Delta y_{i3}] - \boxed{E[y_{i2} \delta_{iT_B}] }$$</p>
<p>Structural breaks introduce bias into the <a href="https://www.aptech.com/resources/tutorials/gmm/introduction/" target="_blank" rel="noopener">GMM</a> difference estimates through the boxed term in the equation above, $\boxed{E[y_{i2} \delta_{iT_B}] \ne 0}$. </p>
<h3 id="the-arellano-bond-level-estimator">The Arellano-Bond Level Estimator</h3>
<p>Now consider the level equation moments when there is a structural break at $T_B = 3$, $t = 4$, and $s = 2$:</p>
<p>$$E[\Delta y_{it-s}(\nu_{it} + \eta_i)] = E[(y_{i3} - y_{i2})(\nu_{i4} + \eta_i)]\\ = E[(\alpha y_{i2} + \delta_{iT_B} + \nu_{i3} + \eta_i - y_{i2})(\nu_{i4} + \eta_i)]\\ = E[\big((\alpha - 1)y_{i2} + \eta_i\big)\eta_i] + \boxed{E[\delta_{iT_B}\eta_i] }$$</p>
<p>In this case structural breaks introduce bias into the GMM level estimates through the boxed term, $\boxed{E[\delta_{iT_B}\eta_i] \ne 0}$.  </p>
<h2 id="the-double-d-gmm-estimator">The Double-D GMM Estimator</h2>
<p>Chowdhury and Russell (2018) propose the use of a new <em>Double-D</em> GMM estimator. The <em>Double-D</em> estimator uses lagged differences as instruments and pairs them with the <em>differenced</em> errors, so that the moment conditions are given by $E[\Delta y_{i,t-s} \Delta \nu_{it}] = 0$ where $s \ge 2$. </p>
<p>To see how this eliminates the bias, consider the case where $t = 5$ and $s = 2$:</p>
<p>$$E[\Delta y_{it-2}(\Delta \nu_{it} + \Delta \eta_i)] = E[\Delta y_{i3} (\Delta \nu_{i5})]\\ = E[\big((\alpha - 1)y_{i2} + \delta_{iT_B} + \nu_{i3} + \eta_i\big)(\Delta \nu_{i5})]\\ = E[\big((\alpha - 1)y_{i2} + \nu_{i3} + \eta_i\big)(\Delta \nu_{i5})] + \boxed{E[(\delta_{iT_B} \Delta \nu_{i5})]}$$</p>
<p>Note that in the <em>Double-D</em> moment equation the boxed term $\boxed{E[(\delta_{iT_B} \Delta \nu_{i5})]}$ is equal to zero and the moments are valid. </p>
<h2 id="conclusion">Conclusion</h2>
<p>Structural breaks cannot be ignored, whether working with pure time series models or panel data models. When introduced into dynamic panel data models, structural breaks bias the Arellano-Bond moments, in turn biasing the coefficient estimates. Chowdhury and Russell (2018) propose a promising solution to this bias, the <em>Double-D</em> estimator.  </p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/panel-data-structural-breaks-and-unit-root-testing/" target="_blank" rel="noopener">Panel data, structural breaks and unit root testing</a></li>
<li><a href="https://www.aptech.com/blog/panel-data-basics-one-way-individual-effects/" target="_blank" rel="noopener">Panel Data Basics: One-way Individual Effects</a></li>
<li><a href="https://www.aptech.com/blog/how-to-aggregate-panel-data-in-gauss/" target="_blank" rel="noopener">How to Aggregate Panel Data in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-panel-data/" target="_blank" rel="noopener">Introduction to the Fundamentals of Panel Data</a></li>
<li><a href="https://www.aptech.com/blog/panel-data-stationarity-test-with-structural-breaks/" target="_blank" rel="noopener">Panel Data Stationarity Test With Structural Breaks</a></li>
<li><a href="https://www.aptech.com/blog/transforming-panel-data-to-long-form-in-gauss/" target="_blank" rel="noopener">Transforming Panel Data to Long Form in GAUSS</a></li>
</ol>
<h3 id="references">References</h3>
<p>Chowdhury, R. A., &amp; Russell, B. (2018). The difference, system and 'Double-D' GMM panel estimators in the presence of structural breaks. <em>Scottish Journal of Political Economy, 65</em>(3), 271-292.</p>
<p>Nickell, S. (1981). Biases in Dynamic Models with Fixed Effects. <em>Econometrica, 49</em>(6), 1417-1426. doi:10.2307/1911408</p>
    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // caculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/the-effects-of-structural-breaks-on-gmm-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
