<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Time Series &#8211; Aptech</title>
	<atom:link href="https://www.aptech.com/blog/category/time-series/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aptech.com</link>
	<description>GAUSS Software - Fastest Platform for Data Analytics</description>
	<lastBuildDate>Wed, 18 Jun 2025 18:17:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Announcing Time Series MT 4.0</title>
		<link>https://www.aptech.com/blog/announcing-time-series-mt-4-0/</link>
					<comments>https://www.aptech.com/blog/announcing-time-series-mt-4-0/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Wed, 18 Jun 2025 18:05:27 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11585609</guid>

					<description><![CDATA[We’re excited to share the official release of Time Series MT (TSMT) 4.0! 

This release provides a major upgrade to our GAUSS <a href="https://www.aptech.com/blog/getting-started-with-time-series-in-gauss/" target="_blank" rel="noopener">time series tools</a>. With over 40 new features, enhancements, and improvements, TSMT 4.0 significantly expands the scope and usability of TSMT.
]]></description>
										<content:encoded><![CDATA[<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/hd-sign-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/hd-sign-restrictions.jpg" alt="Historical decompositions of unemployment using a sign restricted SVAR." width="1758" height="858" class="aligncenter size-full wp-image-11585540" /></a></p>
<h3 id="introduction">Introduction</h3>
<p>We’re excited to share the official release of Time Series MT (TSMT) 4.0! </p>
<p>This release provides a major upgrade to our GAUSS <a href="https://docs.aptech.com/gauss/tsmt/index.html" target="_blank" rel="noopener">time series tools</a>. With over <a href="https://docs.aptech.com/develop/gauss/tsmt/changelogtsmt.html" target="_blank" rel="noopener">40 new features, enhancements, and improvements</a>, TSMT 4.0 significantly expands the scope and usability of TSMT.</p>
<h2 id="new-tools-for-structural-vector-autoregressive-svar-modeling">New Tools For Structural Vector Autoregressive (SVAR) Modeling</h2>
<p>With the TSMT 4.0 library, you can run <a href="https://www.aptech.com/blog/estimating-svar-models-with-gauss/" target="_blank" rel="noopener">SVAR models</a> out of the box, without complicated programming. New, easy-to-use features allow you to:</p>
<ul>
<li>Estimate reduced-form VAR parameters, impulse response functions (IRFs), and forecast error variance decompositions (FEVDs) with ease.</li>
<li>Apply built-in identification strategies like Cholesky decomposition, <a href="https://www.aptech.com/blog/sign-restricted-svar-in-gauss/" target="_blank" rel="noopener">sign restrictions</a>, and long-run restrictions.</li>
<li>Visualize results using new, streamlined functions for plotting IRFs and FEVDs.</li>
</ul>
<p>TSMT 4.0 makes complex SVAR analysis more accessible—without sacrificing analytical rigor.</p>
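<p>As a rough sketch of that workflow (the function names <code>svarFit</code> and the dataset below are illustrative placeholders, not confirmed TSMT 4.0 API; see the linked SVAR tutorials for the actual calls):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tsmt;

// Load a macro dataset (hypothetical file name)
data = loadd("macro_data.gdat");

// Hypothetical call -- actual TSMT 4.0 names may differ.
// Estimate an SVAR(4) identified with a
// Cholesky decomposition and print results.
call svarFit(data, "gdp + unemp + infl", 4);</code></pre>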
<div style="text-align:center;background-color:#f0f2f4"><hr>Ready to get started using TSMT 4.0? <a href="https://www.aptech.com/contact-us/">Contact us today!</a><hr></div>
<h2 id="sarima-modeling-now-smarter-and-more-flexible">SARIMA Modeling: Now Smarter and More Flexible</h2>
<p>TSMT 4.0 delivers a complete overhaul of its <a href="https://www.aptech.com/blog/easier-arima-modeling-with-state-space-revisiting-inflation-modeling-using-tsmt-4-0/" target="_blank" rel="noopener">SARIMA modeling capabilities</a>, bringing you:</p>
<ul>
<li>Enhanced numerical stability and robust covariance estimation.</li>
<li>Intelligent enforcement of stationarity and invertibility conditions.</li>
<li>Simplified estimation with smart defaults and fewer required inputs.</li>
<li>Support for special cases like white noise and random walks, with or without drift.</li>
<li>Accurate standard error estimation via the delta method.</li>
</ul>
<p>These upgrades streamline SARIMA modeling and help ensure more reliable results across a wider range of model structures.</p>
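<p>For instance, an ARIMA(1,1,1) model like the one reported below can be fit with a single call to the new <code>arimaSS</code> routine (a minimal sketch; <code>wpi.gdat</code> is a hypothetical file containing the WPI series and a date variable):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tsmt;

// Load WPI data (hypothetical file name)
wpi_data = loadd("wpi.gdat");

// Fit an ARIMA(1,1,1): AR order 1,
// differencing order 1, MA order 1
call arimaSS(wpi_data, 1, 1, 1);</code></pre>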
<h2 id="more-insightful-model-diagnostics-and-reporting">More Insightful Model Diagnostics and Reporting</h2>
<pre>================================================================================
Model:                 ARIMA(1,1,1)          Dependent variable:             wpi
Time Span:              1960-01-01:          Valid cases:                    123
                        1990-10-01<br />
SSE:                         64.512          Degrees of freedom:             121
Log Likelihood:             369.791          RMSE:                         0.724
AIC:                        369.791          SEE:                          0.730
SBC:                       -729.958          Durbin-Watson:                1.876
R-squared:                    0.449          Rbar-squared:                 0.440
================================================================================
Coefficient                Estimate      Std. Err.        T-Ratio     Prob |&gt;| t
================================================================================

AR[1,1]                       0.883          0.063         13.965          0.000
MA[1,1]                       0.420          0.121          3.472          0.001
Constant                      0.081          0.730          0.111          0.911
================================================================================</pre>
<p>We’ve reimagined the output experience in TSMT 4.0, making it easier to interpret and compare model results:</p>
<ul>
<li>Output reports are now cleaner, clearer, and more informative.</li>
<li>Expanded diagnostics help you quickly evaluate model assumptions and performance.</li>
<li>Built-in summaries make it simple to assess multiple models side-by-side.</li>
</ul>
<p>With TSMT 4.0, you’ll spend less time deciphering output and more time drawing insights.</p>
<h2 id="seamless-integration-with-gauss-dataframes">Seamless Integration with GAUSS Dataframes</h2>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library tsmt;

// Load dataframe
fname = getGAUSSHome("pkgs/tsmt/examples/var_enders_trans.gdat");
data = loadd(fname);

// Estimate the model
call varmaFit(data, "spread + d_lip_detrend + d4_unem", 3);</code></pre>
<p>TSMT 4.0 fully embraces the <a href="https://www.aptech.com/blog/what-is-a-gauss-dataframe-and-why-should-you-care/" target="_blank" rel="noopener">GAUSS dataframe ecosystem</a>, offering:</p>
<ul>
<li>Automatic recognition of variable names and time spans.</li>
<li>No manual reformatting required: just load your time series data and go.</li>
<li>Outputs that automatically interpret dates and provide human-readable labeling.</li>
</ul>
<p>This integration minimizes setup time and boosts productivity, especially when working with large or complex datasets.</p>
<h2 id="try-out-the-gauss-time-series-mt-4-0-library">Try Out The GAUSS Time Series MT 4.0 Library</h2>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/announcing-time-series-mt-4-0/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Easier ARIMA Modeling with State Space: Revisiting Inflation Modeling Using TSMT 4.0</title>
		<link>https://www.aptech.com/blog/easier-arima-modeling-with-state-space-revisiting-inflation-modeling-using-tsmt-4-0/</link>
					<comments>https://www.aptech.com/blog/easier-arima-modeling-with-state-space-revisiting-inflation-modeling-using-tsmt-4-0/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Tue, 03 Jun 2025 00:31:37 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11585561</guid>

					<description><![CDATA[Estimate ARIMA models in state space form using GAUSS. Learn how arimaSS simplifies modeling, automates forecasting, and supports lag selection.]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>State space models are a powerful tool for analyzing <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">time series data</a>, especially when you want to estimate unobserved components like trends or cycles. But traditionally, setting up these models—even for something as common as ARIMA—can be tedious.</p>
<p>The GAUSS <a href="https://docs.aptech.com/develop/gauss/tsmt/arimass.html" target="_blank" rel="noopener"><code>arimaSS</code></a> function, available in the <a href="https://www.aptech.com/blog/time-series-mt-4-0/" target="_blank" rel="noopener">Time Series MT 4.0 library</a>, lets you estimate state space ARIMA models without manually building the full state space structure. It’s a cleaner, faster, and more reliable way to work with ARIMA models.</p>
<p>In this post, we’ll revisit our <a href="https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/" target="_blank" rel="noopener">inflation modeling example</a> using updated data from the Federal Reserve Economic Data (FRED) database. Along the way, we’ll demonstrate how <code>arimaSS</code> works, how it simplifies the modeling process, and how easy it is to generate forecasts from your results.</p>
<h2 id="why-use-arimass-in-tsmt">Why use <code>arimaSS</code> in TSMT?</h2>
<p>In our earlier state-space inflation example, we manually set up the state space model. This process required a solid understanding of state space modeling, specifically:</p>
<ul>
<li>Setting up the system matrices.  </li>
<li>Initializing state vectors.  </li>
<li>Managing model dynamics.  </li>
<li>Specifying parameter starting values.  </li>
</ul>
<p>In comparison, the <code>arimaSS</code> function handles all of this setup automatically. It internally constructs the appropriate model structure and runs the Kalman filter using standard ARIMA specifications.</p>
<p>Overall, the <code>arimaSS</code> function provides:</p>
<ul>
<li><b>Simplified syntax</b>: No need to manually define matrices or system dynamics. This not only saves time but also reduces the chance of errors or model misspecification.  </li>
<li><b>More robust estimates</b>: Behind-the-scenes improvements, such as enhanced covariance computations and stationarity enforcement, lead to more accurate and stable parameter estimates.  </li>
<li><b>Compatibility with forecasting tools</b>: The <code>arimaSS</code> output structure integrates directly with TSMT tools for computing and plotting forecasts.</li>
</ul>
<h3 id="the-arimass-procedure">The <code>arimaSS</code> Procedure</h3>
<p>The <code>arimaSS</code> procedure has two required inputs:</p>
<ol>
<li>A time series dataset.</li>
<li>The AR order. </li>
</ol>
<p>It also allows four optional inputs for model customization:</p>
<ol>
<li>The order of differencing. </li>
<li>The moving average order. </li>
<li>An indicator controlling whether a constant is included in the model.</li>
<li>An indicator controlling whether a trend is included in the model. </li>
</ol>
<h3 id="general-usage">General Usage</h3>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">aOut = arimaSS(y, p [, d, q, trend, const]);</code></pre>
<hr>
<dl>
<dt>y</dt>
<dd>Tx1 or Tx2 time series data. May include a date variable, which will be removed from the data matrix and is not included in the model as a regressor.</dd>
<dt>p</dt>
<dd>Scalar, the number of autoregressive lags included in the model.</dd>
<dt>d</dt>
<dd>Optional, scalar, the order of differencing. Default = 0.</dd>
<dt>q</dt>
<dd>Optional, scalar, the moving average order. Default = 0.</dd>
<dt>trend</dt>
<dd>Optional, scalar, an indicator variable to include a trend in the model. Set to 1 to include trend, 0 otherwise. Default = 0.</dd>
<dt>const</dt>
<dd>Optional, an indicator variable to include a constant in the model. Set to 1 to include constant, 0 otherwise. Default = 1.</dd>
</dl>
<hr>  
<p>All returns are stored in an <code>arimamtOut</code> structure, including:</p>
<ul>
<li>Estimated parameters. </li>
<li>Model diagnostics and summary statistics. </li>
<li>Model description.</li>
</ul>
<p>The complete contents of the <code>arimamtOut</code> structure include:</p>
<div style="max-height: 400px; overflow-y: auto; border: 1px solid #ccc; padding: 10px;">
  <table style="width: 100%; border-collapse: collapse;">
    <thead>
      <tr>
        <th style="text-align: left; padding: 8px; border-bottom: 1px solid #ddd;">Member</th>
        <th style="text-align: left; padding: 8px; border-bottom: 1px solid #ddd;">Description</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.aic</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Akaike Information Criterion value.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.b</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Estimated model coefficients (Kx1 vector).</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.e</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Residuals from the fitted model (Nx1 vector).</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.ll</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Log-likelihood value of the model.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.sbc</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Schwarz Bayesian Criterion value.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.lrs</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Likelihood Ratio Statistic vector (Lx1).</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.vcb</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Covariance matrix of estimated coefficients (KxK).</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.mse</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Mean squared error of the residuals.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.sse</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Sum of squared errors.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.ssy</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Total sum of squares of the dependent variable.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.rstl</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Instance of <code>kalmanResult</code> structure containing Kalman filter results.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.tsmtDesc</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Instance of <code>tsmtModelDesc</code> structure with model description details.</td>
      </tr>
      <tr>
        <td style="padding: 8px; border-bottom: 1px solid #eee;"><code>amo.sumStats</code></td>
        <td style="padding: 8px; border-bottom: 1px solid #eee;">Instance of <code>tsmtSummaryStats</code> structure containing summary statistics.</td>
      </tr>
    </tbody>
  </table>
</div>
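<p>Once the output is stored, individual members can be pulled directly from the structure (a minimal sketch; <code>y</code> stands in for your time series data):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare the output structure and fit an AR(2)
struct arimamtOut amo;
amo = arimaSS(y, 2);

// Access individual members
print "AIC:"; amo.aic;
print "Coefficients:"; amo.b;</code></pre>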
<h2 id="example-modeling-inflation">Example: Modeling Inflation</h2>
<p>Today, we’ll use a simple, albeit naive, model of inflation. This model is based on a CPI inflation index created from the <a href="https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/" target="_blank" rel="noopener">FRED CPIAUCNS monthly dataset</a>. </p>
<p>To begin, we’ll load and prepare our data directly from the FRED database.</p>
<h3 id="loading-data-from-fred">Loading data from FRED</h3>
<p>Using the <a href="https://docs.aptech.com/develop/gauss/fred_load.html" target="_blank" rel="noopener"><code>fred_load</code></a> and <a href="https://docs.aptech.com/develop/gauss/fred_set.html" target="_blank" rel="noopener"><code>fred_set</code></a> procedures, we will:</p>
<ul>
<li>Pull the continuously compounded annual rate of change from FRED.  </li>
<li>Include data starting from January 1971 (1971m1).</li>
</ul>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set observation start date
fred_params = fred_set("observation_start", "1971-01-01");

// Specify units to be 
// continuously compounded annual 
// rate of change
fred_params = fred_set("units", "cca");

// Specify series to pull
series = "CPIAUCNS";

// Pull data from FRED
cpi_data = fred_load(series, fred_params);

// Preview data
head(cpi_data);</code></pre>
<p>This prints the first five observations:</p>
<pre>            date         CPIAUCNS
      1971-01-01        0.0000000
      1971-02-01        3.0112900
      1971-03-01        3.0037600
      1971-04-01        2.9962600
      1971-05-01        5.9701600 </pre>
<p>To further preview our data, let's create a quick plot of the inflation series using the <a href="https://docs.aptech.com/develop/gauss/plotxy.html" target="_blank" rel="noopener"><code>plotXY</code></a> procedure and a formula string:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">plotXY(cpi_data, "CPIAUCNS~date");</code></pre>
<p>For fun, let’s add a reference line to visualize the Fed’s long-run average inflation target of 2%:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Add inflation target line at 2%
plotAddHLine(2);</code></pre>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/06/raw-series.png"><img src="https://www.aptech.com/wp-content/uploads/2025/06/raw-series.png" alt="US CPI based inflation with inflation targeting line. " width="800" height="600" class="aligncenter size-full wp-image-11585573" /></a></p>
<p>As one final visualization, let's look at the 5-year (60-month) moving average line:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Compute moving average
ma_5yr = movingAve(cpi_data[., "CPIAUCNS"], 60);

// Add to time series plot
plotXY(cpi_data[., "date"], ma_5yr);

// Add inflation targeting line at 2%
plotAddHLine(2);</code></pre>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/06/moving-average.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/06/moving-average.jpg" alt="5 year moving average US CPI based inflation with inflation targeting line." width="800" height="600" class="size-full wp-image-11585574" /></a></p>
<p>The moving average plot highlights long-term trends, filtering out short-term fluctuations and noise: </p>
<ol>
<li><b>The Disinflation Era (approx. 1980-1993):</b> This period is marked by the steep decline in inflation from the double-digit highs of the early 1980s to around 3% by the early 1990s, an outcome of aggressive monetary policy by the Federal Reserve.</li>
<li><b>The ‘Great Moderation’ (mid-1990s to mid-2000s):</b> Inflation remained relatively stable and low, hovering near the Fed's 2% target, marked here with a horizontal line for reference.</li>
<li><b>Post-GFC stagnation (2008-2020):</b> After the 2008 Global Financial Crisis, inflation trended even lower, with the 5-year average dipping below 2% for an extended period, reflecting sluggish demand and persistent slack.</li>
<li><b>Recent surge:</b> The sharp rise beginning around 2021 reflects the post-pandemic spike in inflation, pushing the 5-year average above 3% for the first time in over a decade.</li>
</ol>
<p>We’ll make one final transformation before estimation by converting the &quot;CPIAUCNS&quot; values from percentages to decimals.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">cpi_data[., "CPIAUCNS"] = cpi_data[., "CPIAUCNS"]/100;</code></pre>
<div class="alert alert-info" role="alert">Note: The <code>fred_load</code> procedure requires a valid API key. To download data directly from FRED into GAUSS, you must obtain an API key from <a href="https://fred.stlouisfed.org/docs/api/api_key.html" target="_blank" rel="noopener">FRED</a> and set it in GAUSS. For more details on importing data from FRED, see our earlier blog post, <a href="https://www.aptech.com/blog/importing-fred-data-to-gauss/" target="_blank" rel="noopener">Importing FRED Data to GAUSS</a>.</div>
<h3 id="arima-estimation">ARIMA Estimation</h3>
<p>Now that we’ve loaded our data, we’re ready to estimate our model using <code>arimaSS</code>. We’ll start with a simple AR(2) model. Based on the earlier visualization, it’s reasonable to include a constant but exclude a trend, so we’ll use the default settings for those options.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">call arimaSS(cpi_data, 2);</code></pre>
<p>There are a few helpful things to note about this:</p>
<ol>
<li>We did not need to remove the date vector from <em>cpi_data</em> before passing it to <code>arimaSS</code>. Most <strong>TSMT</strong> functions allow you to include a date vector with your time series. In fact, this is recommended; GAUSS will automatically detect and use the date vector to generate more informative results reports.</li>
<li>In this example, we are not storing the output. Instead, we are printing it directly to the screen using the <code>call</code> keyword.</li>
<li>Because this is strictly an AR model and we’re using the default deterministic components, we only need two inputs: the data and the AR order.</li>
</ol>
<p>A detailed results report is printed to screen:</p>
<pre>================================================================================
Model:                 ARIMA(2,0,0)          Dependent variable:        CPIAUCNS
Time Span:              1971-01-01:          Valid cases:                    652
                        2025-04-01<br />
SSE:                          0.839          Degrees of freedom:             648
Log Likelihood:           -1244.565          RMSE:                         0.036
AIC:                      -2497.130          SEE:                          0.210
SBC:                      -2463.210          Durbin-Watson:                1.999
R-squared:                    0.358          Rbar-squared:                 0.839
================================================================================
Coefficient                Estimate      Std. Err.        T-Ratio     Prob |&gt;| t
--------------------------------------------------------------------------------

Constant                    0.03832        0.00349       10.97118        0.00000
CPIAUCNS L(1)               0.59599        0.03715       16.04180        0.00000
CPIAUCNS L(2)               0.00287        0.03291        0.08726        0.93046
Sigma2 CPIAUCNS             0.00129        0.00007       18.05493        0.00000
================================================================================</pre>
<p>There are some interesting observations from our results:</p>
<ol>
<li>The estimated constant is statistically significant and equal to 0.038 (3.8%). This is higher than the Fed’s long-run inflation target of 2%, but not by much. It’s also important to note that our dataset begins well before the era of formal Fed inflation targeting.</li>
<li>All coefficients are statistically significant <strong>except</strong> for the <code>CPIAUCNS L(2)</code> coefficient.</li>
<li>The table header includes the timespan of our data. This was automatically detected because we included a date vector with our input. If no date vector is included, the timespan will be reported as <code>unknown</code>.</li>
</ol>
<h3 id="extra-credit-looping-for-model-selection">Extra credit: Looping For Model Selection</h3>
<p>The <code>arimaSS</code> procedure doesn’t currently provide built-in optimal lag selection. However, we can write a simple <code>for</code> loop and use an array of structures to identify the best lag length.</p>
<p>Our goal is to select the model with the lowest AIC, allowing for a maximum of 8 lags.</p>
<p>Two tools will help us with this task:</p>
<ol>
<li>An array of structures to store the results from each model.  </li>
<li>A vector to store the AIC values from each model.</li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set maximum lags
maxlags = 8;

// Declare a single structure instance
struct arimamtOut amo;

// Reshape to create structure array
amo = reshape(amo, maxlags, 1);

// AIC storage vector
aic_vector = zeros(maxlags, 1);</code></pre>
<p>Next, we’ll loop through our models. In each iteration, we will:</p>
<ol>
<li>Store the results in a separate <code>arimamtOut</code> structure.  </li>
<li>Extract the AIC and store it in our AIC vector.  </li>
<li>Adjust the sample size so that each lag selection iteration uses the same number of observations.</li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Loop through lag possibilities
for i(1, maxlags, 1);
    // Trim data to enforce sample
    // size consistency 
    y_i = trimr(cpi_data, maxlags-i, 0);

    // Estimate the current 
    // AR(i) model
    amo[i] = arimaSS(y_i, i);

    // Store AIC for easy comparison
    aic_vector[i] = amo[i].aic;
endfor;</code></pre>
<p>Finally, we will use the <a href="https://docs.aptech.com/develop/gauss/minindc.html" target="_blank" rel="noopener"><code>minindc</code></a> procedure to find the index of the minimum AIC:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Optimal lag is equal to location
// of minimum AIC
opt_lag = minindc(aic_vector);

// Print optimal lags
print "Optimal lags:"; opt_lag;

// Select the final output structure
struct arimamtOut amo_final;
amo_final = amo[opt_lag];</code></pre>
<p>The optimal lag length based on the minimum AIC is 8, yielding the following results:</p>
<pre>================================================================================
Model:                 ARIMA(8,0,0)          Dependent variable:        CPIAUCNS
Time Span:              1971-01-01:          Valid cases:                    652
                        2025-04-01<br />
SSE:                          0.803          Degrees of freedom:             642
Log Likelihood:           -1258.991          RMSE:                         0.035
AIC:                      -2537.982          SEE:                          0.080
SBC:                      -2453.182          Durbin-Watson:                1.998
R-squared:                    0.385          Rbar-squared:                 0.939
================================================================================
Coefficient                Estimate      Std. Err.        T-Ratio     Prob |&gt;| t
--------------------------------------------------------------------------------

Constant                    0.03824        0.00512        7.46526        0.00000
CPIAUCNS L(1)               0.58055        0.03917       14.82047        0.00000
CPIAUCNS L(2)              -0.03968        0.04730       -0.83883        0.40156
CPIAUCNS L(3)              -0.01156        0.05062       -0.22833        0.81939
CPIAUCNS L(4)               0.09288        0.04151        2.23749        0.02525
CPIAUCNS L(5)               0.02322        0.04773        0.48639        0.62669
CPIAUCNS L(6)              -0.06863        0.04505       -1.52333        0.12767
CPIAUCNS L(7)               0.16048        0.04038        3.97391        0.00007
CPIAUCNS L(8)              -0.00313        0.02778       -0.11281        0.91018
Sigma2 CPIAUCNS             0.00123        0.00007       18.05512        0.00000
================================================================================</pre>
<div class="alert alert-info" role="alert">It is worth noting that only the coefficients for the 1st, 4th, and 7th lags are statistically significant. This suggests that a model including only those lags may be more appropriate.</div>
<h2 id="conclusion">Conclusion</h2>
<p>The <code>arimaSS</code> function offers a streamlined approach to estimating ARIMA models in state space form, eliminating the need for manual specification of system matrices and initial values. This makes it easier to explore models, experiment with lag structures, and generate forecasts, especially for users who may not be deeply familiar with state space modeling.</p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">Introduction to the Fundamentals of Time Series Data and Analysis</a>  </li>
<li><a href="https://www.aptech.com/blog/importing-fred-data-to-gauss/" target="_blank" rel="noopener">Importing FRED Data to GAUSS</a>  </li>
<li><a href="https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/" target="_blank" rel="noopener">Understanding State-Space Models (An Inflation Example)</a>  </li>
<li><a href="https://www.aptech.com/blog/getting-started-with-time-series-in-gauss/" target="_blank" rel="noopener">Getting Started with Time Series in GAUSS</a></li>
</ol>

<div style="text-align:center;background-color:#455560;color:#FFFFFF">
<hr>
<div class="lp-cta">
    <a href="https://www.aptech.com/contact-us/" class="btn btn-primary">Order TSMT Today!</a>
</div><hr>
</div>


[/markdown]
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/easier-arima-modeling-with-state-space-revisiting-inflation-modeling-using-tsmt-4-0/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Sign Restricted SVAR in GAUSS</title>
		<link>https://www.aptech.com/blog/sign-restricted-svar-in-gauss/</link>
					<comments>https://www.aptech.com/blog/sign-restricted-svar-in-gauss/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Tue, 20 May 2025 14:47:33 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11585501</guid>

					<description><![CDATA[In structural vector autoregressive (SVAR) modeling, one of the core challenges is identifying the structural shocks that drive the system's dynamics.  
<br>
Traditional identification approaches often rely on short-run or long-run restrictions, which require strong theoretical assumptions about contemporaneous relationships or long-term behavior.  
<br>
Sign restriction identification provides greater flexibility by allowing economists to specify only the direction (positive, negative, or neutral) of variable responses to shocks, based on theory.  
<br>
In this blog, we’ll show you how to implement sign restriction identification using the new GAUSS procedure, **svarFit**, introduced in <a href="https://www.aptech.com/blog/time-series-mt-4-0/" target="_blank" rel="noopener">TSMT 4.0</a>.  ]]></description>
										<content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>In structural vector autoregressive (SVAR) modeling, one of the core challenges is identifying the structural shocks that drive the system's dynamics.  </p>
<p>Traditional identification approaches often rely on short-run or long-run restrictions, which require strong theoretical assumptions about contemporaneous relationships or long-term behavior.  </p>
<p>Sign restriction identification provides greater flexibility by allowing economists to specify only the direction (positive, negative, or neutral) of variable responses to shocks, based on theory.  </p>
<p>In this blog, we’ll show you how to implement sign restriction identification using the new GAUSS procedure, <strong>svarFit</strong>, introduced in <a href="https://www.aptech.com/blog/time-series-mt-4-0/" target="_blank" rel="noopener">TSMT 4.0</a>.  </p>
<p>We’ll walk through how to:  </p>
<ul>
<li>Specify sign restrictions.  </li>
<li>Estimate the SVAR model.  </li>
<li>Interpret the resulting impulse response functions (IRFs). </li>
</ul>
<p>By the end of this guide, you’ll have a solid understanding of how to apply sign restrictions to uncover meaningful economic relationships.  </p>
<h2 id="what-are-sign-restrictions">What are Sign Restrictions?</h2>
<p>Sign restrictions are a method of identifying structural shocks in SVAR models by specifying the expected direction of response of endogenous variables.  </p>
<p>Sign restrictions:</p>
<ul>
<li>Do not impose exact constraints on parameter values or long-term impacts; they only require that impulse responses move in a particular direction for a specified period.  </li>
<li>Are flexible and less reliant on strict parametric assumptions than other identification methods.  </li>
<li>Rely on qualitative economic insights, making them less prone to model specification errors.  </li>
</ul>
<p>For example, in a <a href="https://www.aptech.com/blog/the-structural-var-model-at-work-analyzing-monetary-policy/" target="_blank" rel="noopener">monetary policy</a> shock, economic theory might suggest that an increase in interest rates should lead to a decline in output and inflation in the short run. An SVAR sign restriction identification approach would enforce these directional movements.  </p>
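<p>As a concrete illustration, the monetary policy pattern just described can be encoded as a row of signs, one entry per endogenous variable, using <code>-1</code> for a negative response and <code>1</code> for a positive response, following the convention shown later in this post. The variable ordering and name below are purely illustrative:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Hypothetical sketch: one row of a sign restriction matrix
// for a contractionary monetary policy shock.
// Columns (illustrative ordering): output, inflation, interest rate
// Theory: output falls (-1), inflation falls (-1), the rate rises (1)
mp_signs = { -1 -1 1 };</code></pre>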
<div class="alert alert-info" role="alert">If you're looking to brush up on the theoretical aspects of VAR and SVAR models, see our previous blogs for an introduction:  <br><ol style="margin-top: 1px; margin-bottom: 1px; line-height: 1.0;">  <br><li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">&quot;Introduction to the Fundamentals of Vector Autoregressive Models&quot;</a>.</li>  <br><li><a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/" target="_blank" rel="noopener">&quot;Understanding and Solving the Structural Vector Autoregressive Identification Problem&quot;</a>.</li>  <br></ol>  </div>
<h2 id="estimating-svar-models-in-gauss">Estimating SVAR Models in GAUSS</h2>
<p>The <a href="https://docs.aptech.com/gauss/tsmt/svarfit.html" target="_blank" rel="noopener">svarFit</a> procedure, available in TSMT 4.0, offers an all-in-one tool for:  </p>
<ul>
<li>Estimating reduced-form parameters of VAR models.  </li>
<li>Implementing structural identification.  </li>
<li>Deriving impulse response functions (IRFs), forecast error variance decompositions (FEVDs), and historical decompositions (HDs).  </li>
</ul>
<p>While the procedure provides intuitive defaults for quick and easy estimation, it also offers the flexibility to fully customize your model.  </p>
<p>For a detailed, step-by-step walkthrough of the estimation process, refer to my previous blog post:<br />
<a href="https://www.aptech.com/blog/estimating-svar-models-with-gauss/" target="_blank" rel="noopener">Estimating SVAR Models with GAUSS</a>.<br />
That post offers guidance on setting up the model, estimating reduced-form parameters, and performing structural identification.  </p>
<div style="text-align:center;background-color:#f0f2f4"><hr><a href="https://www.aptech.com/contact-us/" target="_blank" rel="noopener"> Get Started with TSMT today!<hr></a></div>
<h3 id="implementing-sign-restrictions-with-svarfit">Implementing Sign Restrictions with <code>svarFit</code></h3>
<p>The <code>svarFit</code> procedure allows you to specify sign restrictions as a structural identification method. This is done in three primary steps:  </p>
<ol>
<li>Set the identification method to sign restrictions.  </li>
<li>Define the sign restriction matrix.  </li>
<li>Specify the shock variables and impacted horizons.  </li>
</ol>
<h2 id="example-sign-restricted-responses-to-supply-demand-and-monetary-policy-shocks">Example: Sign Restricted Responses to Supply, Demand, and Monetary Policy Shocks</h2>
<p>Let's explore an empirical example capturing the dynamic relationships between inflation, unemployment, and the federal funds rate.  </p>
<p>We’ll impose economically meaningful sign restrictions to identify three key shocks:  </p>
<table border="1" cellspacing="0" cellpadding="5">  
  <thead style="background-color: #f2f2f2;">  
    <tr>  
      <th>Shock Type</th>  
      <th>Inflation</th>  
      <th>Unemployment</th>  
      <th>Federal Funds Rate</th>  
    </tr>  
  </thead>  
  <tbody>  
    <tr>  
      <td><strong>Supply Shock</strong></td>  
      <td>-</td>  
      <td>-</td>  
      <td>-</td>  
    </tr>  
    <tr>  
      <td><strong>Demand Shock</strong></td>  
      <td>+</td>  
      <td>-</td>  
      <td>+</td>  
    </tr>  
    <tr>  
      <td><strong>Monetary Policy Shock</strong></td>  
      <td>-</td>  
      <td>+</td>  
      <td>+</td>  
    </tr>  
  </tbody>  
</table>
<p>These restrictions allow us to apply economic theory to untangle the underlying structural drivers behind observed movements in the data.  </p>
<h3 id="step-one-loading-our-data">Step One: Loading Our Data</h3>
<p>The first step in our model is to load the data from the <em>data_narsignrestrict.dta</em> file. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Data import
*/
fname = "data_narsignrestrict.dta";
data_shortrun = loadd(fname);</code></pre>
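<p>Before moving on, it can be worth confirming that the data loaded as expected. A minimal check using standard GAUSS data tools might look like:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Preview the first observations to verify
// variable names and date handling
head(data_shortrun);

// Confirm the number of observations and variables
print rows(data_shortrun);
print cols(data_shortrun);</code></pre>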
<h3 id="step-two-specifying-the-var-model">Step Two: Specifying the VAR Model</h3>
<p>In this example, we will estimate an SVAR(2) model which includes three endogenous variables and a constant:</p>
<p>$$\begin{aligned} \ln\text{inflat}_t = c_1 &+ a_{11} \ln\text{inflat}_{t-1} + a_{12} \ln\text{fedfunds}_{t-1} + a_{13} \ln\text{unempl}_{t-1} \\ &+ a_{14} \ln\text{inflat}_{t-2} + a_{15} \ln\text{fedfunds}_{t-2} + a_{16} \ln\text{unempl}_{t-2} + u_{1t} \\ \ln\text{fedfunds}_t = c_2 &+ a_{21} \ln\text{inflat}_{t-1} + a_{22} \ln\text{fedfunds}_{t-1} + a_{23} \ln\text{unempl}_{t-1} \\ &+ a_{24} \ln\text{inflat}_{t-2} + a_{25} \ln\text{fedfunds}_{t-2} + a_{26} \ln\text{unempl}_{t-2} + u_{2t} \\ \ln\text{unempl}_t = c_3 &+ a_{31} \ln\text{inflat}_{t-1} + a_{32} \ln\text{fedfunds}_{t-1} + a_{33} \ln\text{unempl}_{t-1} \\ &+ a_{34} \ln\text{inflat}_{t-2} + a_{35} \ln\text{fedfunds}_{t-2} + a_{36} \ln\text{unempl}_{t-2} + u_{3t} \\ \end{aligned}$$</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Specifying the model
*/
// Three endogenous variables
// No exogenous variables  
formula = "lninflat + lnunempl + lnfedfunds";

// Specify number of lags
lags = 2;

// Include constant
const = 1;</code></pre>
<h3 id="step-three-set-up-sign-restrictions">Step Three: Set up Sign Restrictions</h3>
<p>To set up sign restrictions we need to:</p>
<ol>
<li>Specify sign restrictions as the identification method using the <em>ident</em> input. </li>
<li>Set up the sign restriction matrix using the <em>irf.signRestrictions</em> member of the <code>svarControl</code> structure.</li>
<li>Define the restricted shock variables and the restriction horizon using the <em>irf.restrictedShock</em> and <em>irf.restrictionHorizon</em> members of the <code>svarControl</code> structure.</li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Sign restriction setup
*/
// Specify identification method
ident = "sign";

// Declare controls structure
// Fill with defaults
struct svarControl Sctl;
Sctl = svarControlCreate();

// Specify to use sign restrictions
Sctl.irf.ident = "sign";

// Specify which shock variables are restricted
Sctl.irf.restrictedShock = { 1, 2, 3 };

// Set up restrictions horizon
Sctl.irf.restrictionHorizon = { 1, 1, 1 };

// Set up restrictions matrix
// A row for each shock, and a column for each variable
//             lninflat     lnunempl     lnfedfunds
// shock           
// supply          -           -             -
// demand          +           -             +
// monetary        -           +             +
Sctl.irf.signRestrictions = { -1 -1 -1,
                               1 -1  1,
                              -1  1  1 };</code></pre>
<h3 id="step-four-estimate-model">Step Four: Estimate Model</h3>
<p>Finally, we estimate our model using <code>svarFit</code>.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Estimate VAR model
*/
struct svarOut sOut;
sOut = svarFit(data_shortrun, formula, ident, const, lags, Sctl);</code></pre>
<p>Calling the <code>svarFit</code> procedure loads the <code>svarOut</code> structure with results and automatically prints results to the screen.</p>
<div style="max-height: 500px; overflow-y: auto; border: 1px solid #ccc; padding: 10px; background-color: #f9f9f9;">
<pre>
=====================================================================================================
Model:                      SVAR(2)                               Number of Eqs.:                   3
Time Span:              1960-01-01:                               Valid cases:                    162
                        2000-10-01                                                                   
Log Likelihood:             406.137                               AIC:                        -13.305
                                                                  SBC:                        -12.962
=====================================================================================================
Equation                             R-sq                  DW                 SSE                RMSE

lninflat                          0.76855             2.10548            17.06367             0.33180 
lnunempl                          0.97934             4.92336             0.21507             0.03725 
lnfedfunds                        0.94903             2.30751             1.80772             0.10799 
=====================================================================================================
Results for reduced form equation lninflat
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

             Constant             0.06817             0.20780             0.32804             0.74332 
        lninflat L(1)             0.59712             0.07736             7.71851             0.00000 
        lnunempl L(1)            -1.14092             0.67732            -1.68448             0.09410 
      lnfedfunds L(1)             0.30207             0.25870             1.16765             0.24474 
        lninflat L(2)             0.25045             0.08002             3.12976             0.00209 
        lnunempl L(2)             1.05780             0.65416             1.61703             0.10790 
      lnfedfunds L(2)            -0.16005             0.26135            -0.61237             0.54119 
=====================================================================================================
Results for reduced form equation lnunempl
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

             Constant             0.01819             0.02333             0.77975             0.43673 
        lninflat L(1)             0.01173             0.00869             1.35062             0.17878 
        lnunempl L(1)             1.55876             0.07604            20.49928             0.00000 
      lnfedfunds L(1)             0.01946             0.02904             0.66991             0.50391 
        lninflat L(2)            -0.00899             0.00898            -1.00024             0.31875 
        lnunempl L(2)            -0.59684             0.07344            -8.12681             0.00000 
      lnfedfunds L(2)             0.00563             0.02934             0.19193             0.84805 
=====================================================================================================
Results for reduced form equation lnfedfunds
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

             Constant             0.16038             0.06764             2.37124             0.01896 
        lninflat L(1)             0.02722             0.02518             1.08115             0.28131 
        lnunempl L(1)            -1.14540             0.22046            -5.19558             0.00000 
      lnfedfunds L(1)             1.03509             0.08420            12.29300             0.00000 
        lninflat L(2)             0.04302             0.02605             1.65183             0.10059 
        lnunempl L(2)             1.09553             0.21292             5.14528             0.00000 
      lnfedfunds L(2)            -0.12063             0.08507            -1.41801             0.15820 
=====================================================================================================
</pre>
</div>
<h3 id="step-five-visualize-dynamics">Step Five: Visualize Dynamics</h3>
<p>Once our model is estimated, we can gain insight into the system's dynamics by plotting:</p>
<ol>
<li>Impulse response functions. </li>
<li>Forecast error variance decompositions. </li>
</ol>
<p>First, let's look at the responses to a demand shock (<em>lnunempl</em>):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Visualizing dynamics
*/
// Plot IRFs of `lnunempl` shock 
plotIRF(sOut, "lnunempl", 1);

// Plot FEVDs of `lnunempl` shock
plotFEVD(sOut, "lnunempl", 1);</code></pre>
<p>The <code>plotIRF</code> procedure generates a grid plot of the IRFs to a shock:
<a href="https://www.aptech.com/wp-content/uploads/2025/05/irfs-sign-restricted.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/irfs-sign-restricted.jpg" alt="Impulse response functions to a demand shock using sign restricted SVAR." width="879" height="429" class="size-full wp-image-11585538" /></a> </p>
<p>The <code>plotFEVD</code> procedure generates an area plot of the FEVD:
<a href="https://www.aptech.com/wp-content/uploads/2025/05/fevd-sign-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/fevd-sign-restrictions.jpg" alt="Forecast error variance decompositions following a demand shock using sign restricted SVAR." width="879" height="429" class="aligncenter size-full wp-image-11585539" /></a></p>
<h4 id="what-do-we-see-in-the-irf-and-fevd-plots">What Do We See in the IRF and FEVD Plots?</h4>
<p>The dynamic responses to a demand shock in <em>lnunempl</em> provide useful insights into how the system behaves over time. Below, we highlight key observations from the forecast error variance decompositions (FEVDs) and impulse response functions (IRFs).  </p>
<h4 id="forecast-error-variance-decomposition-fevd">Forecast Error Variance Decomposition (FEVD)</h4>
<p>The FEVD plot shows the contribution of each variable to the forecast variance of <em>lnunempl</em> over time:  </p>
<ul>
<li>In the short run (periods 0–2), <em>lnunempl</em> itself accounts for most of the variation.  </li>
<li>As the forecast horizon increases, the role of <em>lninflat</em> grows, eventually contributing around 40% of the variation.  </li>
<li>The largest and most persistent contribution comes from <em>lnfedfunds</em>, which stabilizes above 45%, highlighting its long-term influence on unemployment dynamics.  </li>
<li>The share of <em>lnunempl</em> decreases steadily, dropping below 20% in later periods—suggesting that external variables explain more of the variation over time.  </li>
</ul>
<h4 id="impulse-response-functions-irfs">Impulse Response Functions (IRFs)</h4>
<p>The IRFs to a shock in <em>lnunempl</em> display the dynamic responses of each variable in the system:  </p>
<ul>
<li><strong> <em>lninflat</em> </strong> responds positively with a hump-shaped profile. It peaks around period 4–5 before gradually returning to baseline.  </li>
<li><strong> <em>lnunempl</em> </strong> initially declines but then reverses and increases slightly, indicating a short-run drop followed by a modest rebound.  </li>
<li><strong> <em>lnfedfunds</em> </strong> responds sharply with a peak around period 4, suggesting a monetary tightening reaction. The response tapers off over time but remains positive.  </li>
</ul>
<p>These dynamics are consistent with a demand-driven shock: falling unemployment puts upward pressure on inflation and triggers an increase in interest rates.</p>
<h3 id="step-six-analyze-historical-decomposition">Step Six: Analyze Historical Decomposition</h3>
<p>Next, we'll examine the historical decomposition of the <em>lnunempl</em> variable. Historical decompositions allow us to break down the observed movements in a variable over time into contributions from each structural shock identified in the model.  </p>
<p>This provides valuable insight into which shocks were most influential during specific periods and helps explain how demand, supply, and monetary policy shocks have shaped the path of unemployment.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Plot HDs for `lnunempl` 
plotHD(sOut, "lnunempl", 1);</code></pre>
<p>The <code>plotHD</code> procedure generates a time-series bar plot of the HD:
<a href="https://www.aptech.com/wp-content/uploads/2025/05/hd-sign-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/hd-sign-restrictions.jpg" alt="Historical decompositions of unemployment using a sign restricted SVAR. " width="1758" height="858" class="aligncenter size-full wp-image-11585540" /></a></p>
<h4 id="what-we-see-in-the-hd-plot">What Do We See in the HD Plot?</h4>
<p>The HD plot shows the time-varying contributions of each structural shock to fluctuations in <em>lnunempl</em>:</p>
<ul>
<li>
<p><strong>Inflation shocks</strong> (<em>lninflat</em>) explain a significant share of unemployment increases in the middle portion of the sample. Their contribution is mostly positive during that period, suggesting inflationary pressure played a role in raising unemployment.</p>
</li>
<li>
<p><strong>Unemployment shocks</strong> (<em>lnunempl</em>) dominate early and late periods of the sample. These are likely capturing idiosyncratic or residual variation not explained by the other two shocks.</p>
</li>
<li><strong>Federal funds rate shocks</strong> (<em>lnfedfunds</em>) play a more modest but noticeable role during downturns. Their influence is generally negative, suggesting that monetary tightening helped reduce unemployment volatility in those windows.</li>
</ul>
<p>Overall, the decomposition illustrates that no single shock dominates throughout the entire sample. Different drivers shape the evolution of unemployment depending on the macroeconomic context.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Today's blog demonstrates how sign restriction identification in SVAR models can provide meaningful insights into the structural dynamics behind key macroeconomic variables.  </p>
<p>Using economically motivated sign restrictions, we were able to:</p>
<ul>
<li>Uncover and interpret the dynamic responses to different shocks.</li>
<li>Visualize the relative importance of each shock over time. </li>
<li>Trace the evolving drivers of unemployment through historical decomposition.  </li>
</ul>
<p>These findings show how SVAR models, when combined with flexible identification strategies like sign restrictions, offer a powerful framework for modeling complex macroeconomic interactions.</p>
<div class="alert alert-info" role="alert">You can find the code and data for today's blog <a href="https://github.com/aptech/gauss_blog/tree/master/time_series/svar-sign-restrictions" target="_blank" rel="noopener">here</a>.</div>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">Introduction to the Fundamentals of Time Series Data and Analysis</a>  </li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">Introduction to the Fundamentals of Vector Autoregressive Models</a>  </li>
<li><a href="https://www.aptech.com/blog/the-intuition-behind-impulse-response-functions-and-forecast-error-variance-decomposition/" target="_blank" rel="noopener">The Intuition Behind Impulse Response Functions and Forecast Error Variance Decomposition</a>  </li>
<li><a href="https://www.aptech.com/blog/introduction-to-granger-causality/" target="_blank" rel="noopener">Introduction to Granger Causality</a>  </li>
<li><a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/" target="_blank" rel="noopener">Understanding and Solving the Structural Vector Autoregressive Identification Problem</a>  </li>
<li><a href="https://www.aptech.com/blog/the-structural-var-model-at-work-analyzing-monetary-policy/" target="_blank" rel="noopener">The Structural VAR Model at Work: Analyzing Monetary Policy</a>
    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></li>
</ol>
<h2 id="try-out-gauss-tsmt-4-0">Try Out GAUSS TSMT 4.0</h2>
[contact-form-7]

]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/sign-restricted-svar-in-gauss/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Estimating SVAR Models With GAUSS</title>
		<link>https://www.aptech.com/blog/estimating-svar-models-with-gauss/</link>
					<comments>https://www.aptech.com/blog/estimating-svar-models-with-gauss/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 09 May 2025 18:10:23 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11585447</guid>

					<description><![CDATA[Structural Vector Autoregressive (SVAR) models provide a structured approach to modeling dynamics and understanding the relationships between multiple time series variables. Their ability to capture complex interactions among multiple endogenous variables makes SVAR models fundamental tools in economics and finance. However, traditional software for estimating SVAR models has often been complicated, making analysis difficult to perform and interpret. In today's blog, we present a step-by-step guide to using the new GAUSS procedure, svarFit, introduced in TSMT 4.0. We will cover: Estimating reduced form models. Structural identification using short-run restrictions. Structural identification using long-run restrictions. Structural identification using sign restrictions.]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>Structural Vector Autoregressive (SVAR) models provide a structured approach to modeling dynamics and understanding the relationships between multiple time series variables. Their ability to capture complex interactions among multiple endogenous variables makes SVAR models fundamental tools in economics and finance. However, traditional software for estimating SVAR models has often been complicated, making analysis difficult to perform and interpret.   </p>
<p>In today's blog, we present a step-by-step guide to using the new GAUSS procedure, <a href="https://docs.aptech.com/gauss/tsmt/svarfit.html" target="_blank" rel="noopener">svarFit</a>, introduced in <a href="https://www.aptech.com/blog/time-series-mt-4-0/" target="_blank" rel="noopener">TSMT 4.0</a>.</p>
<h2 id="understanding-svar-models">Understanding SVAR Models</h2>
<p>A Structural Vector Autoregression (SVAR) model extends the basic Vector Autoregression (VAR) model by incorporating economic theory through restrictions that help identify structural shocks. This added structure allows analysts to understand how unexpected changes (shocks) in one variable impact others within the system over time. </p>
<h3 id="reduced-form-vs-structural-form">Reduced Form vs. Structural Form</h3>
<ul>
<li><b>Reduced Form:</b> Represents observable relationships without assumptions about the underlying economic structure. This form is purely data-driven and descriptive.</li>
<li><b>Structural Form:</b> Applies economic theory through restrictions, enabling the identification of structural shocks. This form provides deeper insights into causal relationships.</li>
</ul>
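<p>In matrix form, the two representations can be written as:</p>
<p>$$\begin{aligned} \text{Reduced form:} \quad y_t &= c + A_1 y_{t-1} + \dots + A_p y_{t-p} + u_t \\ \text{Structural form:} \quad B_0 y_t &= c^* + A_1^* y_{t-1} + \dots + A_p^* y_{t-p} + \varepsilon_t \end{aligned}$$</p>
<p>The reduced-form residuals are linear combinations of the structural shocks, $$u_t = B_0^{-1} \varepsilon_t,$$ so identification amounts to placing enough restrictions on the impact matrix (or on the implied long-run responses) to recover the structural shocks from the reduced-form residuals.</p>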
<table>
    <thead>
         <tr>
         <th colspan="3"><h3 id="types-of-restrictions">Types of Restrictions</h3>
         </th>
        </tr>
        <tr>
            <th>Restriction</th>
            <th>Description</th>
            <th>Example</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><b>Short-run Restrictions</b></td>
            <td>Assume certain immediate relationships between variables.</td>
            <td>A monetary policy shock affects interest rates instantly but impacts inflation with a delay.</td>
        </tr>
        <tr>
            <td><b>Long-run Restrictions</b></td>
            <td>Impose conditions on the variables' behavior in the long term.</td>
            <td>Monetary policy does not have a long-term effect on real GDP.</td>
        </tr>
        <tr>
            <td><b>Sign Restrictions</b></td>
            <td>Constrain the direction of variables' responses to shocks.</td>
            <td>A positive supply shock decreases inflation and increases output.</td>
        </tr>
    </tbody>
</table>
<div class="alert alert-info" role="alert">If you're looking for a more in-depth introduction to VAR models and SVAR, see our previous blogs:<br><ol style="margin-top: 1px; margin-bottom: 1px; line-height: 1.0;"><br><li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">&quot;Introduction to the Fundamentals of Vector Autoregressive Models&quot;</a>.</li><br><li><a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/" target="_blank" rel="noopener">&quot;Understanding and Solving the Structural Vector Autoregressive Identification Problem&quot;</a>.</li><br></ol></div>
<h2 id="the-svarfit-procedure">The <code>svarFit</code> Procedure</h2>
<p>The <code>svarFit</code> procedure is an all-in-one tool for estimating SVAR models. It provides a streamlined approach to specifying, estimating, and analyzing SVAR models in GAUSS. With <code>svarFit</code>, you can:</p>
<ol>
<li>Estimate the reduced form VAR model.</li>
<li>Apply short-run, long-run, or sign restrictions to identify structural shocks.</li>
<li>Analyze dynamics through Impulse Response Functions (IRF), Forecast Error Variance Decomposition (FEVD), and Historical Decompositions (HD). </li>
<li>Bootstrap confidence intervals to make statistical inferences with greater reliability.</li>
</ol>
<h3 id="general-usage">General Usage</h3>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">sOut = svarFit(data, formula [, ident, const, lags, ctl])
sOut = svarFit(Y [, X_exog, ident, const, lags, ctl])</code></pre>
<hr>
<dl>
<dt>data</dt>
<dd>String or dataframe, the filename or dataframe to be used with the formula string.</dd>
<dt>formula</dt>
<dd>String, model formula string.</dd>
<dt>Y</dt>
<dd>Matrix or dataframe, TxM or Tx(M+1) time series data. May include a date variable, which will be removed from the data matrix and is not included in the model as a regressor.</dd>
<dt>X_exog</dt>
<dd>Optional, matrix or dataframe, exogenous variables. If specified, the model is estimated as a VARX model. The exogenous variables are assumed to be stationary and are included in the model as additional regressors. May include a date variable, which will be removed from the data matrix and is not included in the model as a regressor.</dd>
<dt>ident</dt>
<dd>Optional, string, the identification method. Options include: <code>"oir"</code> = zero short-run restrictions, <code>"bq"</code> = zero long-run restrictions, <code>"sign"</code> = sign restrictions.</dd>
<dt>const</dt>
<dd>Optional, scalar, specifying deterministic components of model. <code>0</code> = No constant or trend, <code>1</code> = Constant, <code>2</code> = Constant and trend. Default = 1.</dd>
<dt>lags</dt>
<dd>Optional, scalar, number of lags to include in the VAR model. If not specified, the optimal number of lags will be computed using the information criterion specified in <em>ctl.ic</em>.</dd>
<dt>ctl</dt>
<dd>Optional, an instance of the <code>svarControl</code> structure used for setting advanced controls for estimation.
<hr>  </dd>
</dl>
<h3 id="specifying-the-model">Specifying the Model</h3>
<p>The <code>svarFit</code> procedure is fully compatible with GAUSS dataframes, allowing for intuitive model specification using formula strings. This makes it easy to set up and estimate VAR models directly from your data.</p>
<p>For example, suppose we want to model the relationship between GDP Growth Rate (GR_GDP) and Inflation Rate (IR) over time. A VAR(2) model with two lags can be represented mathematically as follows:</p>
<p>$$\begin{aligned} GR\_GDP_t = c_1 &+ a_{11} GR\_GDP_{t-1} + a_{12} IR_{t-1} \\ &+ a_{13} GR\_GDP_{t-2} + a_{14} IR_{t-2} + u_{1t} \\ IR_t = c_2 &+ a_{21} GR\_GDP_{t-1} + a_{22} IR_{t-1} \\ &+ a_{23} GR\_GDP_{t-2} + a_{24} IR_{t-2} + u_{2t} \end{aligned}$$</p>
<p>Assume that our data is already loaded into a <a href="https://www.aptech.com/blog/what-is-a-gauss-dataframe-and-why-should-you-care/" target="_blank" rel="noopener">GAUSS dataframe</a>, <em>econ_data</em>. This model can be directly specified for estimation using a <a href="https://www.aptech.com/resources/tutorials/formula-string-syntax/" target="_blank" rel="noopener">formula string</a>: </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Estimate SVAR model 
call svarFit(econ_data, "GR_GDP + IR");</code></pre>
<p>Now, let's extend our model by including an exogenous variable, interest rate (INT), to this model. Our extended VAR(2) model equations are updated as follows:</p>
<p>$$\begin{aligned} GR\_GDP_t = c_1 &+ a_{11} GR\_GDP_{t-1} + a_{12} IR_{t-1} + a_{13} GR\_GDP_{t-2} + a_{14} IR_{t-2} \\ &+ b_1 INT_t + u_{1t} \\ IR_t = c_2 &+ a_{21} GR\_GDP_{t-1} + a_{22} IR_{t-1} + a_{23} GR\_GDP_{t-2} + a_{24} IR_{t-2} \\ &+ b_2 INT_t + u_{2t} \end{aligned}$$</p>
<p>To include this exogenous variable in our model specification, we simply update the formula string using the <code>"~"</code> symbol: </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Estimate model 
call svarFit(econ_data, "GR_GDP + IR ~ INT");</code></pre>
<div class="alert alert-info" role="alert">The <code>svarFit</code> procedure also accepts data matrices as an alternative to using formula strings. </div>
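<p>For example, here is a minimal sketch of the matrix interface for the GDP/inflation model above (the column selections assume <em>econ_data</em> contains these variables):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Pull the endogenous variables into a T x 2 matrix
Y = econ_data[., "GR_GDP" "IR"];

// Pull the exogenous variable into its own matrix
X_exog = econ_data[., "INT"];

// Estimate the same VARX model from matrices
call svarFit(Y, X_exog);</code></pre>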
<h2 id="storing-results-with-svarout">Storing Results with <code>svarOut</code></h2>
<p>When we estimate SVAR models using <code>svarFit</code>, the results are stored in an <code>svarOut</code> structure. This structure is designed for intuitive access to key outputs, such as model coefficients, residuals, IRFs, and more. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare output structure
struct svarOut sOut;

// Estimate model
sOut = svarFit(econ_data, "GR_GDP + IR ~ INT");</code></pre>
<p>Beyond storing results, the <code>svarOut</code> structure is used for many post-estimation functions, such as <a href="https://docs.aptech.com/gauss/tsmt/plotirf.html" target="_blank" rel="noopener">plotIRF</a>, <a href="https://docs.aptech.com/gauss/tsmt/plotfevd.html" target="_blank" rel="noopener">plotFEVD</a> and <a href="https://docs.aptech.com/gauss/tsmt/plothd.html" target="_blank" rel="noopener">plotHD</a>. </p>
<table>
    <thead>
        <tr>
         <th colspan="3"><h3 id="key-members-of-svarout">Key Members of svarOut</h3>
         </th>
        </tr><tr>
            <th>Component</th>
            <th>Description</th>
            <th>Example Usage</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><b>sOut.coefficients</b></td>
            <td>Estimated coefficients of the model.</td>
            <td><code>print sOut.coefficients;</code></td>
        </tr>
        <tr>
            <td><b>sOut.residuals</b></td>
            <td>Residuals of the VAR equations, representing the portion not explained by the model.</td>
            <td><code>print sOut.residuals;</code></td>
        </tr>
        <tr>
            <td><b>sOut.yhat</b></td>
            <td>In-sample predicted values of the dependent variables.</td>
            <td><code>print sOut.yhat;</code></td>
        </tr>
        <tr>
            <td><b>sOut.sigma</b></td>
            <td>Covariance matrix of the residuals.</td>
            <td><code>print sOut.sigma;</code></td>
        </tr>
        <tr>
            <td><b>sOut.irf</b></td>
            <td>Impulse Response Functions (IRFs) for analyzing the effects of shocks over time.</td>
            <td><code>plotIRF(sOut);</code></td>
        </tr>
        <tr>
            <td><b>sOut.fevd</b></td>
            <td>Forecast Error Variance Decomposition (FEVD) to evaluate the contribution of each shock to forecast errors.</td>
            <td><code>print sOut.fevd;</code></td>
        </tr>
        <tr>
            <td><b>sOut.HD</b></td>
            <td>Historical Decompositions to analyze historical contributions of shocks.</td>
            <td><code>print sOut.HD;</code></td>
        </tr>
        <tr>
            <td><b>sOut.aic</b>, <b>sOut.sbc</b></td>
            <td>Model selection criteria: Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC).</td>
            <td><code>print sOut.aic;</code></td>
        </tr>
    </tbody>
</table>
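<p>As a quick sketch of accessing these members (assuming <em>sOut</em> has been filled by a call to <code>svarFit</code> as above):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Compare information criteria for model selection
print "AIC:" sOut.aic;
print "SBC:" sOut.sbc;

// Inspect the residual covariance matrix
// of the reduced form
print sOut.sigma;</code></pre>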
<div style="text-align:center;background-color:#f0f2f4"><hr><a href="https://www.aptech.com/contact-us/" target="_blank" rel="noopener">Order Time Series MT 4.0 today!</a><hr></div>
<h2 id="example-one-applying-short-run-restrictions">Example One: Applying Short Run Restrictions</h2>
<p>As a first example, let's start with the default behavior of <code>svarFit</code>, which is to estimate Short-Run Restrictions. </p>
<p>Short-Run Restrictions:</p>
<ul>
<li>Assume that certain relationships between variables are instantaneous. </li>
<li>Are useful for modeling the immediate impacts of economic shocks, such as changes in interest rates or policy decisions.</li>
<li>Rely on a lower triangular matrix (Cholesky decomposition), which implies that variable ordering matters.</li>
</ul>
<div class="alert alert-info" role="alert">For a more technical explanation of short-run restrictions, see the detailed explanation <a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/#zero-short-run-restrictions-cholesky-identification" target="_blank" rel="noopener">here</a>. </div>
<h3 id="loading-our-data">Loading Our Data</h3>
<p>In this example, we will apply short-run restrictions to a VAR model with three endogenous variables: Inflation (<code>Inflat</code>), Unemployment (<code>Unempl</code>), and the Federal Funds Rate (<code>Fedfunds</code>). </p>
<p>First, we load the dataset from the file <code>"data_shortrun.dta"</code> and specify our formula string:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Load data
*/
fname = "data_shortrun.dta";
data_shortrun = loadd(fname);

// Specify model formula string 
// Three endogenous variables
// No exogenous variables 
formula = "Inflat + Unempl + Fedfunds";</code></pre>
<p>In this case, the order of the variables in the formula string implies: </p>
<ul>
<li><em>Inflat</em> affects <em>Unempl</em> and <em>Fedfunds</em> contemporaneously.</li>
<li><em>Unempl</em> affects <em>Fedfunds</em> but not <em>Inflat</em> contemporaneously.</li>
<li><em>Fedfunds</em> does not affect the other variables contemporaneously.</li>
</ul>
<h3 id="estimating-default-model">Estimating Default Model</h3>
<p>If we want to use the model defaults, this is all we need to set up prior to estimation. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare output structure
// for storing results
struct svarOut sOut;

// Estimate model with defaults
sOut = svarFit(data_shortrun, formula);</code></pre>
<p>The <code>svarFit</code> procedure prints the reduced-form estimates:</p>
<div style="max-height: 500px; overflow-y: auto; border: 1px solid #ccc; padding: 10px; background-color: #f9f9f9;">
<pre>
=====================================================================================================
Model:                      SVAR(6)                               Number of Eqs.:                   3
Time Span:              1960-01-01:                               Valid cases:                    158
                        2000-10-01                                                                   
Log Likelihood:            -344.893                               AIC:                         -3.464
                                                                  SBC:                         -2.418
=====================================================================================================
Equation                             R-sq                  DW                 SSE                RMSE

Inflat                            0.86474             1.93244           129.75134             0.96616 
Unempl                            0.98083             7.89061             7.05807             0.22534 
Fedfunds                          0.93764             2.81940            97.09873             0.83579 
=====================================================================================================
Results for reduced form equation Inflat
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

             Constant             0.78598             0.39276             2.00116             0.04732 
          Inflat L(1)             0.61478             0.08430             7.29320             0.00000 
          Unempl L(1)            -1.20719             0.40464            -2.98335             0.00337 
        Fedfunds L(1)             0.12674             0.10292             1.23142             0.22024 
          Inflat L(2)             0.08949             0.09798             0.91339             0.36262 
          Unempl L(2)             2.17171             0.66854             3.24845             0.00146 
        Fedfunds L(2)            -0.05198             0.13968            -0.37216             0.71034 
          Inflat L(3)             0.04730             0.09946             0.47556             0.63514 
          Unempl L(3)            -1.01991             0.70890            -1.43872             0.15248 
        Fedfunds L(3)             0.02764             0.14328             0.19292             0.84731 
          Inflat L(4)             0.18545             0.09767             1.89877             0.05967 
          Unempl L(4)            -0.95056             0.70881            -1.34106             0.18209 
        Fedfunds L(4)            -0.11887             0.14160            -0.83945             0.40266 
          Inflat L(5)            -0.07630             0.09902            -0.77052             0.44230 
          Unempl L(5)             1.07985             0.68944             1.56628             0.11956 
        Fedfunds L(5)             0.14800             0.13465             1.09912             0.27361 
          Inflat L(6)             0.14879             0.08763             1.69800             0.09174 
          Unempl L(6)            -0.17321             0.38210            -0.45330             0.65104 
        Fedfunds L(6)            -0.16674             0.10030            -1.66238             0.09869 
=====================================================================================================
Results for reduced form equation Unempl
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

             Constant             0.05439             0.09160             0.59376             0.55364 
          Inflat L(1)             0.04011             0.01966             2.03992             0.04325 
          Unempl L(1)             1.47354             0.09438            15.61362             0.00000 
        Fedfunds L(1)            -0.00510             0.02400            -0.21231             0.83218 
          Inflat L(2)            -0.02196             0.02285            -0.96086             0.33829 
          Unempl L(2)            -0.52754             0.15592            -3.38329             0.00093 
        Fedfunds L(2)             0.06812             0.03258             2.09107             0.03834 
          Inflat L(3)             0.00214             0.02320             0.09211             0.92674 
          Unempl L(3)             0.10859             0.16534             0.65680             0.51239 
        Fedfunds L(3)            -0.04923             0.03342            -1.47314             0.14297 
          Inflat L(4)            -0.02574             0.02278            -1.12973             0.26053 
          Unempl L(4)            -0.32361             0.16532            -1.95752             0.05229 
        Fedfunds L(4)             0.03248             0.03303             0.98338             0.32713 
          Inflat L(5)             0.02071             0.02309             0.89691             0.37132 
          Unempl L(5)             0.36505             0.16080             2.27026             0.02473 
        Fedfunds L(5)            -0.01161             0.03141            -0.36975             0.71213 
          Inflat L(6)            -0.00669             0.02044            -0.32745             0.74382 
          Unempl L(6)            -0.14897             0.08912            -1.67160             0.09685 
        Fedfunds L(6)            -0.00212             0.02339            -0.09070             0.92786 
=====================================================================================================
Results for reduced form equation Fedfunds
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

             Constant             0.28877             0.33977             0.84990             0.39684 
          Inflat L(1)             0.05831             0.07292             0.79960             0.42530 
          Unempl L(1)            -1.93356             0.35004            -5.52374             0.00000 
        Fedfunds L(1)             0.93246             0.08903            10.47324             0.00000 
          Inflat L(2)             0.22166             0.08476             2.61524             0.00990 
          Unempl L(2)             2.17717             0.57833             3.76457             0.00025 
        Fedfunds L(2)            -0.37931             0.12083            -3.13915             0.00207 
          Inflat L(3)            -0.08237             0.08604            -0.95729             0.34008 
          Unempl L(3)            -0.96474             0.61325            -1.57317             0.11795 
        Fedfunds L(3)             0.53848             0.12395             4.34438             0.00003 
          Inflat L(4)            -0.00264             0.08449            -0.03123             0.97513 
          Unempl L(4)             1.41077             0.61317             2.30078             0.02289 
        Fedfunds L(4)            -0.14852             0.12249            -1.21246             0.22739 
          Inflat L(5)            -0.15941             0.08566            -1.86101             0.06486 
          Unempl L(5)            -0.74153             0.59641            -1.24333             0.21584 
        Fedfunds L(5)             0.34789             0.11648             2.98663             0.00333 
          Inflat L(6)             0.09898             0.07580             1.30579             0.19378 
          Unempl L(6)             0.01450             0.33055             0.04387             0.96507 
        Fedfunds L(6)            -0.38014             0.08677            -4.38099             0.00002 
=====================================================================================================
</pre>
</div>
<p>The reported reduced-form results include:</p>
<ul>
<li>The date range identified in the dataframe, <em>data_shortrun</em>. </li>
<li>The model estimated, based on the selected optimal number of lags, in this case SVAR(6).</li>
<li>Model diagnostics including R-squared (R-sq), the Durbin-Watson statistic (DW), Sum of the Squared Errors (SSE), and Root Mean Squared Errors (RMSE), by equation.  </li>
<li>Parameter estimates, printed separately for each equation. </li>
</ul>
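<p>Under short-run identification, the impact matrix is the lower-triangular Cholesky factor of the reduced-form residual covariance. As an illustration (using the <em>sOut</em> structure filled above and the built-in GAUSS <code>chol</code> function, not the internal computations of <code>svarFit</code> itself), it can be recovered directly:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// chol() returns the upper-triangular factor R
// with sigma = R'R, so the lower-triangular
// impact matrix is its transpose
B0 = chol(sOut.sigma)';

// B0 is lower triangular: the first variable in the
// formula string responds only to its own shock on impact
print B0;</code></pre>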
<h3 id="customizing-our-model">Customizing Our Model</h3>
<p>The default model is a good start, but suppose we want to make the following customizations:</p>
<ul>
<li>Include two exogenous variables, <em>trend</em> and <em>trendsq</em>. </li>
<li>Exclude a constant. </li>
<li>Estimate a VAR(2) model. </li>
<li>Change the IRF/FEVD horizon from 20 to 40.</li>
<li>Change the IRF/FEVD confidence level from 95% to 68%.</li>
</ul>
<table>
    <thead>
        <tr>
         <th colspan="3"><h3 id="implementing-model-customizations">Implementing Model Customizations</h3>
         </th>
        </tr><tr>
            <th>Customization</th>
            <th>Tool</th>
            <th>Example</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><b>Adding exogenous variables.</b></td>
            <td>Adding a <code>"~"</code> and the RHS variables to our formula string.</td>
            <td><code>formula = "Inflat + Unempl + Fedfunds ~ trend + trendsq";</code></td>
        </tr>
         <tr>
            <td><b>Specify identification method.</b></td>
            <td>Set our optional <em>ident</em> input to <code>"oir"</code>.</td>
            <td><code>ident = "oir";</code></td>
        </tr>
          <tr>
            <td><b>Exclude a constant.</b></td>
            <td>Set our optional <em>const</em> input to 0.</td>
            <td><code>const = 0;</code></td>
        </tr>
        <tr>
            <td><b>Estimate a VAR(2) model.</b></td>
            <td>Set the optional <em>lags</em> input.</td>
            <td><code>lags = 2;</code></td>
        </tr>
        <tr>
            <td><b>Change the IRF/FEVD horizon.</b></td>
            <td>Update the <i>irf.nsteps</i> member of the <code>svarControl</code> structure.</td>
            <td><code>sCtl.irf.nsteps = 40;</code></td>
        </tr>
        <tr>
            <td><b>Change the IRF/FEVD confidence level.</b></td>
            <td>Update the <i>irf.cl</i> member of the <code>svarControl</code> structure.</td>
            <td><code>sCtl.irf.cl = 0.68;</code></td>
        </tr>
    </tbody>
</table>
<p>Putting everything together:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load library
new;
library tsmt;

/*
** Load data
*/
fname = "data_shortrun.dta";
data_shortrun = loadd(fname);

// Specify model formula string 
// Three endogenous variables
// Two exogenous variables  
formula = "Inflat + Unempl + Fedfunds ~ trend + trendsq";

// Identification method
ident = "oir";

// Estimate VAR(2)
lags = 2;

// Constant off
const = 0;

// Declare control structure
// and fill with defaults
struct svarControl sCtl;
sCtl = svarControlCreate();

// Update IRF/FEVD settings
sCtl.irf.nsteps = 40;
sCtl.irf.cl = 0.68;

/*
** Estimate VAR model
*/
struct svarOut sOut2;
sOut2 = svarFit(data_shortrun, formula, ident, const, lags, sCtl);</code></pre>
<div style="max-height: 500px; overflow-y: auto; border: 1px solid #ccc; padding: 10px; background-color: #f9f9f9;">
<pre>
=====================================================================================================
Model:                      SVAR(2)                               Number of Eqs.:                   3
Time Span:              1960-01-01:                               Valid cases:                    162
                        2000-10-01                                                                   
Log Likelihood:            -413.627                               AIC:                         -3.185
                                                                  SBC:                         -2.842
=====================================================================================================
Equation                             R-sq                  DW                 SSE                RMSE

Inflat                            0.83877             1.78639           159.81843             1.01872 
Unempl                            0.97835             5.82503             8.01756             0.22817 
Fedfunds                          0.91719             2.20585           135.51524             0.93807 
=====================================================================================================
Results for reduced form equation Inflat
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

          Inflat L(1)             0.65368             0.07951             8.22173             0.00000 
          Unempl L(1)            -0.36875             0.34207            -1.07799             0.28272 
        Fedfunds L(1)             0.19093             0.09600             1.98894             0.04848 
          Inflat L(2)             0.17424             0.08324             2.09308             0.03798 
          Unempl L(2)             0.30882             0.33838             0.91265             0.36285 
        Fedfunds L(2)            -0.16561             0.09995            -1.65695             0.09956 
                trend             0.03084             0.01278             2.41268             0.01701 
              trendsq            -0.00019             0.00008            -2.55370             0.01163 
=====================================================================================================
Results for reduced form equation Unempl
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

          Inflat L(1)             0.04566             0.01781             2.56408             0.01130 
          Unempl L(1)             1.48522             0.07662            19.38488             0.00000 
        Fedfunds L(1)             0.01387             0.02150             0.64508             0.51983 
          Inflat L(2)            -0.02556             0.01864            -1.37111             0.17234 
          Unempl L(2)            -0.51248             0.07579            -6.76186             0.00000 
        Fedfunds L(2)             0.02509             0.02239             1.12095             0.26406 
                trend            -0.00587             0.00286            -2.05169             0.04189 
              trendsq             0.00003             0.00002             1.99972             0.04729 
=====================================================================================================
Results for reduced form equation Fedfunds
=====================================================================================================
          Coefficient            Estimate           Std. Err.             T-Ratio          Prob |&gt;| t
-----------------------------------------------------------------------------------------------------

          Inflat L(1)             0.00902             0.07321             0.12316             0.90214 
          Unempl L(1)            -1.28526             0.31499            -4.08026             0.00007 
        Fedfunds L(1)             0.93532             0.08840            10.58097             0.00000 
          Inflat L(2)             0.19137             0.07665             2.49660             0.01359 
          Unempl L(2)             1.25710             0.31159             4.03445             0.00009 
        Fedfunds L(2)            -0.05845             0.09204            -0.63513             0.52629 
                trend             0.00195             0.01177             0.16561             0.86868 
              trendsq             0.00000             0.00007             0.03606             0.97128 
=====================================================================================================
</pre>
</div>
<h3 id="visualizing-dynamics">Visualizing dynamics</h3>
<p>The TSMT 4.0 library also includes a set of tools for quickly plotting dynamic shock responses after SVAR estimation. These functions take a filled <code>svarOut</code> structure and generate pre-formatted plots of IRFs, FEVDs, or HDs.</p>
<table>
    <thead>
        <tr>
            <th>Function</th>
            <th>Description</th>
            <th>Example Usage</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><b>plotIRF</b></td>
            <td>
                Plots the <b>Impulse Response Functions (IRFs)</b> for the specified shock variables over time. 
                IRFs illustrate how each variable responds to a shock in another variable.
            </td>
            <td>
                <code>plotIRF(sOut, "Inflat");</code>
            </td>
        </tr>
        <tr>
            <td><b>plotFEVD</b></td>
            <td>
                Visualizes the <b>Forecast Error Variance Decomposition (FEVD)</b>, which shows the contribution of each shock to the forecast error variance of each variable.
            </td>
            <td>
                <code>plotFEVD(sOut);</code>
            </td>
        </tr>
        <tr>
            <td><b>plotHD</b></td>
            <td>
                Plots the <b>Historical Decompositions (HD)</b>, which show each shock's historical contribution to the observed movements of the variables.
            </td><td>
                <code>plotHD(sOut);</code>
            </td>
        </tr>
    </tbody>
</table>
<p>Let's plot the IRFs, FEVDs, and HDs in response to a shock to <em>Inflat</em> from our customized model:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify shock variable
shk_var = "Inflat";

// Plot IRFs
plotIRF(sOut, shk_var);

// Plot FEVDs
plotFEVD(sOut, shk_var);

// Plot HDs
plotHD(sOut, shk_var);</code></pre>
<p>This generates a grid plot of IRFs:</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/irfs-sr-restrictions-1.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/irfs-sr-restrictions-1.jpg" alt="Impulse response functions after a shock to inflation using a VAR(2) model." width="815" height="343" class="aligncenter size-full wp-image-11585483" /></a></p>
<p>An area plot of the FEVDs:</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/fevd-sr-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/fevd-sr-restrictions.jpg" alt="Forecast error variance decompositions in response to inflation shock. " width="815" height="343" class="aligncenter size-full wp-image-11585481" /></a></p>
<p>And a bar plot of the HDs:</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/hd-sr-restrictions-1.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/hd-sr-restrictions-1.jpg" alt="" width="667" height="778" class="size-full wp-image-11585534" /></a></p>
<h2 id="example-two-applying-long-run-restrictions">Example Two: Applying Long-Run Restrictions</h2>
<p>Long-run restrictions are often used in macroeconomic analysis to reflect theoretical assumptions about how certain shocks affect the economy over time. In this example, we follow the Blanchard-Quah (1989) approach to impose a long-run restriction that shocks to Unemployment do not affect GDP Growth in the long run.</p>
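<p>In standard SVAR notation (this is the textbook mechanics behind the restriction, not TSMT-specific syntax): if the reduced-form VAR is $y_t = A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t$ with structural shocks given by $u_t = B_0 \varepsilon_t$, the Blanchard-Quah scheme identifies $B_0$ by forcing the long-run cumulative response matrix to be lower triangular:</p>
<p>$$ (I - A_1 - \cdots - A_p)^{-1} B_0 = \begin{bmatrix} * &amp; 0 \\ * &amp; * \end{bmatrix} $$</p>
<p>With GDP Growth ordered first, the zero in the upper-right corner encodes the assumption that the second shock, associated with Unemployment, has no long-run effect on the level of GDP.</p>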
<h3 id="setting-up-the-model">Setting Up the Model</h3>
<p>First, we load our long-run dataset, <em>data_longrun.dta</em>, specify the model formula string, and turn the constant off. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load the dataset
fname = "data_longrun.dta";
data_longrun = loadd(fname);

// Specify the model formula with two endogenous variables
formula = "GDPGrowth + Unemployment";

// Set lags to missing to use optimal lags
lags = miss();

// Constant off
const = 0;</code></pre>
<p>To change the identification method, we use the optional <em>ident</em> input, which accepts three possible settings: &quot;oir&quot;, &quot;bq&quot;, and &quot;sign&quot;.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Use BQ identification
ident = "bq";</code></pre>
<p>Next, we declare an instance of the <code>svarControl</code> structure and specify our IRF settings.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare the control structure
struct svarControl sCtl;
sCtl = svarControlCreate();

// Set IRF confidence level
sCtl.irf.cl = 0.68;

// Expand IRF horizon
sCtl.irf.nsteps = 40;</code></pre>
<p>Finally, we estimate our model and plot the dynamic responses. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Estimate the SVAR model with long-run restrictions
struct svarOut sOut;
sOut = svarFit(data_longrun, formula, ident, const, lags, sCtl);

// Specify shock variable
shk_var = "GDPGrowth";

// Plot IRFs
plotIRF(sOut, shk_var);

// Plot FEVDs
plotFEVD(sOut, shk_var);

// Plot HDs
plotHD(sOut, shk_var);</code></pre>
<p>This generates a grid plot of IRFs:</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/irf-lr-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/irf-lr-restrictions.jpg" alt="Impulse response functions after long-run restrictions. " width="879" height="429" class="aligncenter size-full wp-image-11585491" /></a></p>
<p>An area plot of the FEVDs:</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/fevd-lr-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/fevd-lr-restrictions.jpg" alt="Forecast error variance decompositions with long-run restrictions." width="879" height="429" class="aligncenter size-full wp-image-11585492" /></a></p>
<p>And a bar plot of the HDs:</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/05/hd-lr-restrictions.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/05/hd-lr-restrictions.jpg" alt="Historical decompositions using long-run restrictions." width="879" height="429" class="aligncenter size-full wp-image-11585493" /></a></p>
<h2 id="conclusion">Conclusion</h2>
<p>The <code>svarFit</code> procedure, introduced in TSMT 4.0, makes it much easier to estimate and analyze SVAR models in GAUSS. In this post, we walked through how to apply both short-run and long-run restrictions to understand the structural dynamics between variables.</p>
<p>With just a few lines of code, you can estimate the model, specify identification restrictions, and visualize the results. This flexibility allows you to tailor your analysis to different economic theories without getting bogged down in complex setups.</p>
<p>You can find the code and data for today's blog <a href="https://github.com/aptech/gauss_blog/tree/master/time_series/svar-jamel" target="_blank" rel="noopener">here</a>.</p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">Introduction to the Fundamentals of Time Series Data and Analysis</a>  </li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">Introduction to the Fundamentals of Vector Autoregressive Models</a>  </li>
<li><a href="https://www.aptech.com/blog/the-intuition-behind-impulse-response-functions-and-forecast-error-variance-decomposition/" target="_blank" rel="noopener">The Intuition Behind Impulse Response Functions and Forecast Error Variance Decomposition</a>  </li>
<li><a href="https://www.aptech.com/blog/introduction-to-granger-causality/" target="_blank" rel="noopener">Introduction to Granger Causality</a>  </li>
<li><a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/" target="_blank" rel="noopener">Understanding and Solving the Structural Vector Autoregressive Identification Problem</a>  </li>
<li><a href="https://www.aptech.com/blog/the-structural-var-model-at-work-analyzing-monetary-policy/" target="_blank" rel="noopener">The Structural VAR Model at Work: Analyzing Monetary Policy</a> </li>
<li><a href="https://www.aptech.com/blog/sign-restricted-svar-in-gauss/" target="_blank" rel="noopener">Sign Restricted SVAR in GAUSS</a></li>
</ol>
<div class="alert alert-info" role="alert">Thank you <a href="https://www.jamelsaadaoui.com/" target="_blank" rel="noopener">Jamel Saadaoui</a> for the blog suggestion and publicly provided data. </div>
<p>    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script></p>
<h2 id="try-out-gauss-tsmt-4-0">Try Out GAUSS TSMT 4.0</h2>
[contact-form-7]

]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/estimating-svar-models-with-gauss/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Time Series MT 4.0</title>
		<link>https://www.aptech.com/blog/time-series-mt-4-0/</link>
					<comments>https://www.aptech.com/blog/time-series-mt-4-0/#respond</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Wed, 30 Apr 2025 17:18:07 +0000</pubDate>
				<category><![CDATA[Releases]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11584760</guid>

					<description><![CDATA[With over 40 new features, enhancements, and bug fixes, <a href="https://docs.aptech.com/gauss/tsmt/index.html" target="_blank" rel="noopener">Time Series MT (TSMT) 4.0</a> is one of our most significant updates yet.

Highlights of the new release include:
<ul>
<li>Structural VAR (SVAR) Tools.</li>
<li>Enhanced SARIMA Modeling.</li>
<li>Extended Model Diagnostics and Reporting.</li>
<li>Seamless <a href="https://www.aptech.com/blog/what-is-a-gauss-dataframe-and-why-should-you-care/" target="_blank" rel="noopener">Dataframe</a> Integration.</li>
</ul>]]></description>
										<content:encoded><![CDATA[<p><a href="https://www.aptech.com/wp-content/uploads/2021/04/irf-var-blog-small.jpeg"><img src="https://www.aptech.com/wp-content/uploads/2021/04/irf-var-blog-small.jpeg" alt="Impulse response functions after VAR estimation." width="1200" height="393" class="aligncenter size-full wp-image-11581203" /></a></p>
<h2 id="introduction">Introduction</h2>
<p>With over 40 new features, enhancements, and bug fixes, <a href="https://docs.aptech.com/gauss/tsmt/index.html" target="_blank" rel="noopener">Time Series MT (TSMT) 4.0</a> is one of our most significant updates yet.</p>
<p>Highlights of the new release include:</p>
<ul>
<li>Structural VAR (SVAR) Tools.</li>
<li>Enhanced SARIMA Modeling.</li>
<li>Extended Model Diagnostics and Reporting.</li>
<li>Seamless <a href="https://www.aptech.com/blog/what-is-a-gauss-dataframe-and-why-should-you-care/" target="_blank" rel="noopener">Dataframe</a> Integration.</li>
</ul>
<h2 id="new-svar-tools">New SVAR Tools</h2>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare control structure
// and fill with defaults
struct svarControl ctl;
ctl = svarControlCreate();

ctl.irf.ident = "long";

// Set maximum number of lags
maxlags = 8;

//  Turn constant on
const = 1;

// Estimate structural VAR model
call svarFit(Y, maxlags, const, ctl);</code></pre>
<p>TSMT 4.0 includes a comprehensive new suite of functions that makes SVAR estimation intuitive and hassle-free. </p>
<ul>
<li>Effortlessly estimate reduced-form parameters, impulse response functions (IRFs), and forecast error variance decompositions (FEVDs) using <a href="https://docs.aptech.com/gauss/tsmt/svarfit.html" target="_blank" rel="noopener">svarFit</a>.</li>
<li>Take advantage of built-in identification strategies, including Cholesky decomposition, sign restrictions, and long-run restrictions.</li>
<li>Use new functions for cleanly plotting IRFs and FEVDs. </li>
</ul>
<h2 id="enhanced-sarima-modeling">Enhanced SARIMA Modeling</h2>
<p>Significant upgrades to the <a href="https://docs.aptech.com/gauss/tsmt/arimass.html" target="_blank" rel="noopener">SARIMA state space framework</a> deliver improved numerical stability, more accurate covariance estimation, and rigorous enforcement of stationarity and invertibility conditions.</p>
<p><a href="https://www.aptech.com/wp-content/uploads/2025/04/arima-forecast.jpg"><img src="https://www.aptech.com/wp-content/uploads/2025/04/arima-forecast.jpg" alt="" width="800" height="306" class="aligncenter size-full wp-image-11585292" /></a></p>
<p>Key enhancements include:</p>
<ul>
<li><b>Simplified Estimations:</b> Optional arguments with smart defaults streamline model setup and estimation.</li>
<li><b>Broader Model Support:</b> Support now includes white noise and random walk models with optional constants and drift terms.</li>
<li><b>Enhanced Accuracy:</b> Standard errors are now computed using the delta method, explicitly accounting for constraints that enforce stationarity and invertibility.</li>
</ul>
<div style="text-align:center;background-color:#f0f2f4"><hr><a href="https://www.aptech.com/request-demo/" target="_blank" rel="noopener">Time Series MT 4.0, now available!</a><hr></div>
<h2 id="extended-model-diagnostics-and-reporting">Extended Model Diagnostics and Reporting</h2>
<pre>================================================================================
Model:                 ARIMA(1,1,1)          Dependent variable:             wpi
Time Span:              1960-01-01:          Valid cases:                    123
                        1990-10-01<br />
SSE:                         64.512          Degrees of freedom:             121
Log Likelihood:             369.791          RMSE:                         0.724
AIC:                        369.791          SEE:                          0.730
SBC:                       -729.958          Durbin-Watson:                1.876
R-squared:                    0.449          Rbar-squared:                 0.440
================================================================================
Coefficient                Estimate      Std. Err.        T-Ratio     Prob |&gt;| t
================================================================================

AR[1,1]                       0.883          0.063         13.965          0.000
MA[1,1]                       0.420          0.121          3.472          0.001
Constant                      0.081          0.730          0.111          0.911
================================================================================</pre>
<p>Completely redesigned output reports and extended diagnostics make model evaluation and comparison easier and more insightful than ever. </p>
<p>New enhancements include:</p>
<ul>
<li><b>Expanded diagnostics</b> for quick assessment of model fit and underlying assumptions.</li>
<li><b>Clear, intuitive reports</b> that make it easy to compare multiple models side-by-side.</li>
<li><b>Improved readability</b>, to help identify key results and insights.</li>
</ul>
<h2 id="full-dataframe-integration">Full Dataframe Integration</h2>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Lag of independent variables
lag_vars = 2;

// Autoregressive order
order = 3;

// Call autoregmt function
call autoregFit(__FILE_DIR $+ "autoregmt.gdat", "Y ~ X1 + X2", lag_vars, order);</code></pre>
<p>Complete compatibility with GAUSS dataframes simplifies the modeling workflow and ensures outputs are intuitive and easy to interpret.</p>
<ul>
<li><b>Automatic Variable Name Recognition:</b> Automatically detects and uses variable names, eliminating manual setup and saving time.</li>
<li><b>Effortless Date Management:</b> Intelligent handling of date formats and time spans for clearer output reports.</li>
<li><b>Clear, Interpretable Outputs:</b> Results are clearly labeled and easy to follow, helping boost productivity and reduce confusion.</li>
</ul>
<div style="text-align:center;background-color:#455560;color:#FFFFFF">
<hr>
<div class="lp-cta">
    <a href="https://www.aptech.com/request-demo/" class="btn btn-primary">Order TSMT Today!</a>
</div><hr>
</div>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/time-series-mt-4-0/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Predicting The Output Gap With Machine Learning Regression Models</title>
		<link>https://www.aptech.com/blog/predicting-the-output-gap-with-machine-learning-regression-models/</link>
					<comments>https://www.aptech.com/blog/predicting-the-output-gap-with-machine-learning-regression-models/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Wed, 12 Apr 2023 18:44:19 +0000</pubDate>
				<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11583590</guid>

					<description><![CDATA[In today's blog, we compare three different machine learning regression techniques for predicting the U.S. real GDP output gap. We will use a combination of common economic indicators and GDP subcomponents to predict the quarterly GDP output gap. ]]></description>
										<content:encoded><![CDATA[    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // calculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script>
<h3 id="introduction">Introduction</h3>
<p>Economists are increasingly exploring the potential for machine learning models in economic forecasting. This blog offers an introduction to using three different machine learning regression techniques for economic modeling, using an empirical application to the real U.S. GDP output gap. </p>
<p>We look specifically at:</p>
<ul>
<li>Measuring the output gap. </li>
<li>The fundamentals of three machine learning regression models. </li>
<li>Model estimation using the <a href="https://docs.aptech.com/gauss/gml-landing.html" target="_blank" rel="noopener">GAUSS Machine Learning library</a>.</li>
</ul>
<h2 id="measuring-gdp-output-gap">Measuring GDP Output Gap</h2>
<p>The GDP output gap is a macroeconomic indicator that measures the difference between potential GDP and actual GDP. It is an interesting and useful economic statistic:</p>
<ul>
<li>It indicates whether the economy is operating with unemployment, inefficiencies, or inflationary pressures, making it useful for policymaking. </li>
<li>Potential GDP is unobservable and must be estimated, and a large literature is devoted to how best to estimate it. </li>
<li>Positive output gaps indicate that the economy is operating over potential GDP and at risk of inflation. </li>
<li>Negative output gaps indicate that the economy is operating below potential GDP and possibly in recession.  </li>
</ul>
<p>Our goal today is to demonstrate different machine learning regression techniques. For simplicity, we're going to use the output gap based on the <a href="https://fred.stlouisfed.org/series/GDPPOT" target="_blank" rel="noopener">Congressional Budget Office's estimate of real potential GDP</a> to train our model. </p>
<div class="alert alert-info" role="alert">We compute the output gap as the percent deviation of real U.S. GDP from the CBO's estimate of real potential GDP. Both components are available for download from the FRED database. </div>
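<p>Concretely, the series we train on can be written in formula form (this simply restates the computation described above):</p>
<p>$$ gap_t = 100 \times \frac{GDP_t - GDP_t^{pot}}{GDP_t^{pot}} $$</p>
<p>where $GDP_t$ is observed real GDP and $GDP_t^{pot}$ is the CBO estimate of real potential GDP in quarter $t$.</p>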
<h2 id="the-models">The Models</h2>
<p>Today we will look at three machine learning models used specifically for predicting continuous data:</p>
<ul>
<li>Decision forest regression (also known as Random forest regression). </li>
<li>LASSO regression. </li>
<li>Ridge regression. </li>
</ul>
<h3 id="decision-forest-regression">Decision Forest Regression</h3>
<h4 id="decision-trees">Decision Trees</h4>
<p>Decision forest regression utilizes decision trees for continuous data which:</p>
<ol>
<li>Segment the data into subsets using data-based <em>splitting rules</em>. </li>
<li>Assign the average of the target variable within a subset as the prediction for all observations that fall inside that subset. </li>
</ol>
<p>To implement a single decision tree, a sample is split into segments using <em>recursive binary splitting</em>. This iterative approach determines where and how to split the data based on what leads to the lowest residual sum of squares (RSS).</p>
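<p>In standard CART notation (a textbook sketch, not TSMT- or GML-specific), each split searches over predictors $j$ and cutpoints $s$ for the pair that minimizes the post-split RSS:</p>
<p>$$ \min_{j,s} \bigg[ \sum_{i:\, x_i \in R_1(j,s)} (y_i - \hat{y}_{R_1})^2 + \sum_{i:\, x_i \in R_2(j,s)} (y_i - \hat{y}_{R_2})^2 \bigg] $$</p>
<p>where $R_1(j,s) = \{x \,|\, x_j &lt; s\}$ and $R_2(j,s) = \{x \,|\, x_j \geq s\}$ are the two resulting regions and $\hat{y}_{R_k}$ is the mean response of the training observations falling in region $k$.</p>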
<h4 id="decision-forests">Decision Forests</h4>
<p>Single decision trees can have low, non-robust predictive power and suffer from high variance. This can be overcome using random decision forests that offer performance improvements by combining results from groups, or &quot;forests&quot;, of trees.</p>
<p>The random decision forest algorithm:</p>
<ol>
<li>Randomly chooses $m$ predictors to be used as candidates for splitting the data.</li>
<li>Constructs a decision tree from a bootstrapped training set. </li>
<li>Repeats the decision tree formation for a specified number of iterations. </li>
<li>Averages the results from all trees to make a final prediction.</li>
</ol>
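<p>Step four amounts to a simple average over the $B$ bootstrapped trees:</p>
<p>$$ \hat{f}_{forest}(x) = \frac{1}{B} \sum_{b=1}^{B} \hat{f}^{*b}(x) $$</p>
<p>where $\hat{f}^{*b}$ is the tree fit on the $b$-th bootstrap sample. Averaging across many trees built from different bootstrap samples and predictor subsets is what drives down the variance of the final prediction.</p>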
<h3 id="lasso-and-ridge-regression">LASSO and Ridge Regression</h3>
<p>LASSO and ridge regression aim to reduce prediction variances using a modified least squares approach. Let's look a little more closely at how this works. </p>
<p>Recall that ordinary least squares estimates coefficients through the minimization of the residual sum of squares (RSS):</p>
<p>$$ RSS = \sum_{i=1}^n \bigg(y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij}\bigg)^2 $$</p>
<p>Penalized least squares estimates coefficients using a modified function:</p>
<p>$$ S_{\lambda} = \sum_{i=1}^n \bigg(y_i - \beta_0 - \sum_{j=1}^p \beta_j x_{ij}\bigg)^2 + \lambda J $$</p>
<p>where $\lambda$ is the tuning parameter and $\lambda J$ is the penalty term. </p>
<table>
<tbody>
<tr><th>Method</th><th>Description</th><th>Penalty term</th></tr>
<tr><td>LASSO Regression</td><td>$L1$ penalized linear regression model.</td><td>$\lambda \sum_{j=1}^p |\beta_j|$</td></tr>
<tr><td>Ridge Regression</td><td>$L2$ penalized linear regression model.</td><td>$\lambda \sum_{j=1}^p \beta_j^2$</td></tr>
</tbody>
</table>
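<p>Unlike LASSO, ridge regression admits a closed-form solution (a standard textbook result, stated in matrix form with the intercept left unpenalized):</p>
<p>$$ \hat{\beta}^{ridge} = (X^T X + \lambda I_p)^{-1} X^T y $$</p>
<p>As $\lambda$ grows, the coefficients shrink toward zero, trading a small amount of bias for a reduction in prediction variance. The LASSO penalty has no closed form, but it can shrink coefficients exactly to zero, so it also performs variable selection.</p>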
<h2 id="our-prediction-process">Our Prediction Process</h2>
<p>Our prediction process is motivated by the idea that as new information becomes available, it should be used to improve our forecasting model. </p>
<p>Based on this motivation, we use an expanding training window to make one-step ahead forecasts:</p>
<ul>
<li>Train the model using all observed data in the training window, features and output gap, up to time $t$.</li>
<li>Predict the output gap at time $t + 1$ using the observed features at time $t + 1$. </li>
<li>Expand the training window to include all observed data up to time $t + 1$.</li>
<li>Repeat model training and prediction. </li>
</ul>
<p>It's worth noting that while this method uses the most information available for each prediction, there is a trade-off in timeliness: in a real-world setting, we could only forecast the output gap one quarter ahead. That may not be far enough in advance if the forecast is guiding business or investment decisions. </p>
<h2 id="predictors">Predictors</h2>
<p>Today we will use a combination of common economic indicators and GDP subcomponents as predictors. </p>
<table>
 <thead>
<tr><th>Variable</th><th>Description</th><th>Transformations</th></tr>
</thead>
<tbody>
<tr><td><a href="https://fred.stlouisfed.org/series/UMCSENT" target="_blank" rel="noopener">UMCSENT</a></td><td>University of Michigan consumer sentiment, quarterly average.</td><td>None.</td></tr>
<tr><td><a href="https://fred.stlouisfed.org/series/UNRATE" target="_blank" rel="noopener">UNRATE</a></td><td>Civilian unemployment rate as a percentage, quarterly average.</td><td>None.</td></tr>
<tr><td>CR</td><td>The credit spread between <a href="https://fred.stlouisfed.org/series/BAAFFM" target="_blank" rel="noopener">Moody's BAA</a> and <a href="https://fred.stlouisfed.org/series/AAAFFM" target="_blank" rel="noopener">AAA</a> corporate bond yields.</td><td>None.</td></tr>
<tr><td>TS</td><td>The difference between the yield on the <a href="https://fred.stlouisfed.org/series/DGS10" target="_blank" rel="noopener">10-year treasury bond</a> and the <a href="https://fred.stlouisfed.org/series/DGS1" target="_blank" rel="noopener">1-year treasury bill</a>.</td><td>None.</td></tr>
<tr><td><a href="https://fred.stlouisfed.org/series/FEDFUNDS" target="_blank" rel="noopener">FEDFUNDS</a></td><td>The Federal Funds rate.</td><td>First differences.</td></tr>
<tr><td><a href="https://fred.stlouisfed.org/series/SP500" target="_blank" rel="noopener">SP500</a></td><td>The S&amp;P 500 index value at market closing.</td><td>Percent change, computed as difference in natural logs.</td></tr>
<tr><td><a href="https://fred.stlouisfed.org/series/CPIAUCSL" target="_blank" rel="noopener">CPIAUCSL</a></td><td>Consumer price index for all urban consumers.</td><td>Percent change, computed as difference in natural logs.</td></tr>
<tr><td><a href="https://fred.stlouisfed.org/series/INDPRO" target="_blank" rel="noopener">INDPRO</a></td><td>The industrial production (IP) index.</td><td>Percent change, computed as difference in natural logs.</td></tr>
<tr><td><a href="https://fred.stlouisfed.org/series/HOUST" target="_blank" rel="noopener">HOUST</a></td><td>New privately-owned housing unit starts.</td><td>Percent change, computed as difference in natural logs.</td></tr>
<tr><td>GAP_CH</td><td>The change in output gap.</td><td>None.</td></tr>
</tbody>
</table>
<p>For our model:</p>
<ul>
<li>All predictors are available from <a href="https://fred.stlouisfed.org/series/GDPPOT" target="_blank" rel="noopener">FRED</a> in levels.         </li>
<li>Monthly variables are aggregated to quarterly data using averages.</li>
<li>Four lags of all variables are included. </li>
</ul>
<h2 id="estimation-in-gauss">Estimation in GAUSS</h2>
<h3 id="data-loading">Data Loading</h3>
<p>Because we want to primarily focus on the models, rather than data cleaning, we don't go into the details of our data cleaning process here. Instead, the cleaned and prepped data is available for download <a href="https://github.com/aptech/gauss_blog/blob/master/machine-learning/ml-regressions/reg_data.gdat?raw=true" target="_blank" rel="noopener">here</a>. </p>
<div class="alert alert-info" role="alert">For more information about data cleaning and management try one of our earlier blogs such as:
<br>• <a href="https://www.aptech.com/blog/importing-fred-data-to-gauss/" target="_blank" rel="noopener">Importing FRED Data To GAUSS</a><br>• <a href="https://www.aptech.com/blog/getting-to-know-your-data-with-gauss-22/" target="_blank" rel="noopener">Getting to Know Your Data With GAUSS</a><br>• <a href="https://www.aptech.com/blog/preparing-and-cleaning-data-fred-data-in-gauss/" target="_blank" rel="noopener">Preparing And Cleaning FRED Data In GAUSS</a>
</div>
<p>Prior to estimating any model, we load the data and separate our outcome and feature data:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">library gml;
rndseed 23423;

/*
** Load data and prepare data
*/
// Load dataset
dataset = __FILE_DIR $+ "reg_data.gdat";
data = loadd(dataset);

// Trim rows from the top of data to account
// for lagged and differenced data
max_lag = 4;
data = trimr(data, max_lag + 1, 0);

/*
** Extract outcome and features
*/
// Extract outcome variable
y = data[., "CBO_GAP"];

// Extract features
X = delcols(data, "date"$|"CBO_GAP");
</code></pre>
<h3 id="general-one-step-ahead-process">General One-Step-Ahead Process</h3>
<p>The full data sample ranges from 1967Q1 to 2022Q4. We'll start computing one-step-ahead forecasts in 1995Q1, using an initial training period of 1967Q1 to 1994Q4. </p>
<p>To implement the expanding window one-step-ahead forecasts, we use a GAUSS <a href="https://docs.aptech.com/gauss/dowhiledountil.html" target="_blank" rel="noopener"><code>do while</code></a> loop:  </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Specify starting date 
st_date = asDate("1994-Q4", "%Y-Q%q");

// Find the index of 'st_date'
st_indx = indnv(st_date, data[., "date"]); 

// Iterate over remaining observations
// using expanding window to fit model
do while st_indx &lt; rows(x)-1;

    // Get y_train and x_train
    y_train = y[1:st_indx];
    x_train = X[1:st_indx, .]; 
    x_test = X[st_indx+1, .];

    // Fit model
    ...

    // Compute one-step-ahead prediction
    ...

    // Update st_indx
    st_indx = st_indx + 1;
endo;</code></pre>
<h3 id="model-and-prediction-procedures">Model and Prediction Procedures</h3>
<p>The GAUSS machine learning library offers all the procedures we need for our model training and prediction. </p>
<table>
<tbody>
<tr><th>Model</th><th>Fitting Procedure</th><th>Prediction Procedure</th></tr>
<tr><td>Decision Forest</td><td><a href="https://docs.aptech.com/gauss/decforestrfit.html" target="_blank" rel="noopener">decForestRFit</a></td><td><a href="https://docs.aptech.com/gauss/decforestpredict.html" target="_blank" rel="noopener">decForestPredict</a></td></tr>
<tr><td>LASSO Regression</td><td><a href="https://docs.aptech.com/gauss/lassofit.html" target="_blank" rel="noopener">lassoFit</a></td><td><a href="https://docs.aptech.com/gauss/lmpredict.html" target="_blank" rel="noopener">lmPredict</a></td></tr>
<tr><td>Ridge Regression</td><td><a href="https://docs.aptech.com/gauss/lassofit.html" target="_blank" rel="noopener">ridgeFit</a></td><td><a href="https://docs.aptech.com/gauss/lmpredict.html" target="_blank" rel="noopener">lmPredict</a></td></tr>
</tbody>
</table>
<p>To simplify our code we will use three <a href="https://www.aptech.com/blog/basics-of-gauss-procedures/" target="_blank" rel="noopener">GAUSS procedures</a> that combine the fitting and prediction steps for each method. </p>
<p>We define one procedure for the one-step ahead prediction for the LASSO model:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">proc (1) = osaLasso(y_train, x_train, x_test, lambda);
    local lasso_prediction;

    /*
    ** Lasso Model
    */
    // Declare 'mdl' to be an instance of a
    // lassoModel structure to hold the estimation results
    struct lassoModel mdl;

    // Estimate the model with default settings
    mdl = lassoFit(y_train, x_train, lambda);

    // Make predictions using test data
    lasso_prediction = lmPredict(mdl, x_test);

    retp(lasso_prediction);
endp;</code></pre>
<p>The second procedure performs fitting and prediction for the ridge model:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">proc (1) = osaRidge(y_train, x_train, x_test, lambda);
    local ridge_prediction;

    /*
    ** Ridge Model
    */
    // Declare 'mdl' to be an instance of a
    // ridgeModel structure to hold the estimation results
    struct ridgeModel mdl;

    // Estimate the model with default settings
    mdl = ridgeFit(y_train, x_train, lambda);

    // Make predictions using test data
    ridge_prediction = lmPredict(mdl, x_test);

    retp(ridge_prediction);
endp;</code></pre>
<p>The final procedure performs fitting and prediction for the decision forest model:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">proc (1) = osaDF(y_train, x_train, x_test, struct dfControl dfc);
    local df_prediction;

    /*
    ** Decision Forest Model
    */
    // Declare 'mdl' to be an instance of a
    // dfModel structure to hold the estimation results
    struct dfModel mdl;

    // Estimate the model with default settings
    mdl = decForestRFit(y_train, x_train, dfc);

    // Make predictions using test data
    df_prediction = decForestPredict(mdl, x_test);

    retp(df_prediction);
endp;</code></pre>
<h3 id="computing-predictions">Computing Predictions</h3>
<p>Finally we are ready to begin computing our predictions. First, we set the necessary tuning parameters:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Set up tuning parameters
*/

// L2 and L1 regularization penalty
lambda = 0.3;

/*
** Settings for decision forest
*/
// Use control structure for settings
struct dfControl dfc;
dfc = dfControlCreate();

// Turn on variable importance
dfc.variableImportanceMethod = 1;

// Turn on out-of-bag error calculation
dfc.oobError = 1;</code></pre>
<div class="alert alert-info" role="alert">We used a λ of 0.3 for both the ridge and LASSO models and the GAUSS default settings for the decision forest hyperparameters. Note that we have not taken any steps to optimize our models; model selection and optimization will be covered in future blogs. </div>
<p>Next, we initialize the starting point for our loop and our prediction storage matrix. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Initialize starting point and
** storage matrix for expanding 
** window loop
*/
st_date = asDate("1994-Q4", "%Y-Q%q");
st_indx = indnv(st_date, data[., "date"]);

// Set up storage dataframe for predictions
// using one column for each model
osa_pred = asDF(zeros(rows(X), 3), "LASSO", "Ridge", "Decision Forest");</code></pre>
<p>Finally, we implement our expanding window <code>do while</code> loop:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">do while st_indx &lt; rows(X)-1;

    // Get y and x subsets for
    // fitting and prediction
    y_train = Y[1:st_indx];
    X_train = X[1:st_indx, .]; 
    X_test = X[st_indx+1, .];

    // LASSO Model
    osa_pred[st_indx+1, "LASSO"] = osaLasso(y_train, X_train, X_test, lambda);

    // Ridge Model
    osa_pred[st_indx+1, "Ridge"] = osaRidge(y_train, X_train, X_test, lambda);

    // Decision Forest Model
    osa_pred[st_indx+1, "Decision Forest"] = osaDF(y_train, X_train, X_test, dfc);

    // Update st_indx
    st_indx = st_indx + 1;
endo;</code></pre>
<h2 id="results">Results</h2>
<h3 id="prediction-visualization">Prediction Visualization</h3>
<p><a href="https://www.aptech.com/wp-content/uploads/2023/08/lr-prediction-comparisons.jpg"><img src="https://www.aptech.com/wp-content/uploads/2023/08/lr-prediction-comparisons.jpg" alt="Comparison of output gap predictions using LASSO, ridge, and decision forest regression. " width="800" height="600" class="aligncenter size-full wp-image-11584010" /></a></p>
<p>The graph above plots the predictions from all three of our models against the actual CBO implied output gap. There are a few things worth noting about these results:</p>
<ul>
<li>All three models fail to predict the output decline associated with the start of the COVID pandemic. This isn't a surprise, as the onset of COVID was a hard-to-predict shock to the economy. </li>
<li>The models underestimate the persistent effects of the 2008 global financial crisis. While all three trend in the same direction as the observed output gap, they all predict better economic performance than was actually observed. This tells us that our feature set doesn't contain the information needed to capture the ongoing effects of the financial crisis. We could potentially improve our model by incorporating additional features such as bank balances or home foreclosures. </li>
<li>The ridge model overestimates the short-term impacts of the 2008 global financial crisis, predicting a larger drop in the output gap than both the other models and the actual output gap.</li>
</ul>
<div class="alert alert-info" role="alert">To learn more about formatting GAUSS plots see our <a href="https://www.aptech.com/blog/category/graphics/">GAUSS graphics blogs</a>. </div>
<h3 id="model-performance">Model Performance</h3>
<p>We can also compare the performance of our models using the mean squared error (MSE). This can easily be calculated from our predictions and our observed output gap:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Computing MSE
*/
// Compute residuals
residuals = osa_pred - y;

// Filter for prediction window
residuals = selif(residuals, data[., "date"] .&gt;= st_date);

// Compute the MSE for prediction window
mse  = meanc((residuals).^2);</code></pre>
<p>A comparison of the MSE shows that the models perform similarly, with our decision forest model offering a slight advantage over LASSO and ridge. </p>
<table style="width:35%;margin-left:auto;margin-right:auto;">
<tbody>
<tr><th style="width:60%">Model</th><th>MSE</th></tr>
<tr><td>LASSO</td><td>2.08</td></tr>
<tr><td>Ridge</td><td>2.36</td></tr>
<tr><td>Decision Forest</td><td>1.80</td></tr>
</tbody>
</table>
<h3 id="conclusion">Conclusion</h3>
<p>In today's blog we examined the performance of several machine learning regression models used to predict the output gap. This blog is meant to provide an introduction to these models and leaves model selection and optimization for future blogs. </p>
<p>After today's blog, you should have a better understanding of:</p>
<ul>
<li>The foundations of decision forest regression models.</li>
<li>LASSO and ridge regression models.</li>
<li>How machine learning models can be used to help predict economic and financial outcomes.</li>
</ul>
<h3 id="further-machine-learning-reading">Further Machine Learning Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/fundamentals-of-tuning-machine-learning-hyperparameters/" target="_blank" rel="noopener">Fundamentals of Tuning Machine Learning Hyperparameters</a>  </li>
<li><a href="https://www.aptech.com/blog/applications-of-principal-components-analysis-in-finance/" target="_blank" rel="noopener">Applications of Principal Components Analysis in Finance</a>  </li>
<li><a href="https://www.aptech.com/blog/predicting-the-output-gap-with-machine-learning-regression-models/" target="_blank" rel="noopener">Predicting The Output Gap With Machine Learning Regression Models</a>  </li>
<li><a href="https://www.aptech.com/blog/classification-with-regularized-logistic-regression/" target="_blank" rel="noopener">Classification with Regularized Logistic Regression</a>  </li>
<li><a href="https://www.aptech.com/blog/understanding-cross-validation/" target="_blank" rel="noopener">Understanding Cross-Validation</a>  </li>
<li><a href="https://www.aptech.com/blog/machine-learning-with-real-world-data/" target="_blank" rel="noopener">Machine Learning With Real-World Data</a>  </li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/predicting-the-output-gap-with-machine-learning-regression-models/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Importing FRED Data to GAUSS</title>
		<link>https://www.aptech.com/blog/importing-fred-data-to-gauss/</link>
					<comments>https://www.aptech.com/blog/importing-fred-data-to-gauss/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Fri, 16 Dec 2022 02:05:10 +0000</pubDate>
				<category><![CDATA[Best Practices]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11583079</guid>

					<description><![CDATA[The GAUSS FRED database integration, introduced in GAUSS 23, is a time-saving feature that allows you to import FRED data directly into GAUSS. This means you have thousands of datasets at your fingertips without ever leaving GAUSS. These tools also ensure that FRED data is imported directly into a GAUSS dataframe format, which can eliminate hours of data cleaning and the headaches that come with it. 

In today's blog, we will learn how to use the FRED import tools to:
<ul>
<li> Search for a FRED data series. </li>
<li> Import FRED data to GAUSS, including merging multiple series. </li>
<li>Use advanced import tools to perform data transformations. </li>
</ul>]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>The GAUSS <a href="https://fred.stlouisfed.org/" target="_blank" rel="noopener">FRED</a> database integration, introduced in <a href="https://www.aptech.com/blog/gauss23/" target="_blank" rel="noopener">GAUSS 23</a>, is a time-saving feature that allows you to import FRED data directly into GAUSS. This means you have thousands of datasets at your fingertips without ever leaving GAUSS. These tools also ensure that FRED data is imported directly into a GAUSS dataframe format, which can eliminate hours of data cleaning and the headaches that come with it. </p>
<p>In today's blog, we will learn how to use the FRED import tools to:</p>
<ul>
<li>Search for a FRED data series. </li>
<li>Import FRED data to GAUSS, including merging multiple series. </li>
<li>Use advanced import tools to perform data transformations. </li>
</ul>
<h2 id="getting-started">Getting Started</h2>
<h3 id="requesting-an-api-key">Requesting an API Key</h3>
<p>Prior to importing any data from FRED using GAUSS, you will need to request an API key from FRED. This can be done on the <a href="https://fredaccount.stlouisfed.org/apikeys" target="_blank" rel="noopener">FRED API Request page</a>. To request an API key you will need to:</p>
<ol>
<li>Create and/or log in to a FRED account. </li>
<li>Provide a brief description of the program you intend to write. This can be as simple as, &quot;Using GAUSS to conduct economic research.&quot;</li>
</ol>
<h3 id="specifying-your-api-key-in-gauss">Specifying your API key in GAUSS</h3>
<p>You can set your API key in GAUSS using any of the following methods:</p>
<ol>
<li>Set the API key directly at the top of your program:
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">FRED_API_KEY = "your_api_key";</code></pre></li>
<li>Set the environment variable <code>FRED_API_KEY</code> to your API key.</li>
<li>Edit your gauss.cfg and modify the <code>fred_api_key</code> value:
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">fred_api_key = your_api_key</code></pre></li>
</ol>
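<p>Once your key is set, a quick way to confirm it is working is to load a small, well-known series; if the key is valid, this returns a dataframe rather than an error. A minimal sketch (<code>GDP</code> is a standard FRED series ID):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Quick sanity check of the API key setup:
// print the first 5 rows of the 'GDP' series
head(fred_load("GDP"));</code></pre>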
<h2 id="finding-your-fred-series">Finding Your FRED Series</h2>
<p>In order to download a series directly from FRED, we need to know the series ID. However, this may not be something you know offhand. Fortunately, we can use the <a href="https://docs.aptech.com/gauss/fred_search.html" target="_blank" rel="noopener"><code>fred_search</code></a> procedure to find the proper series ID.</p>
<p>The <code>fred_search</code> procedure requires one input, a string specifying the search text. As an example, let's search for all series related to <code>"producer price index"</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">fred_search("producer price index");</code></pre>
<p>This prints a search report to the command window. The first five rows are:</p>
<pre>frequency  frequency_short group_popularity              id     last_updated  observation_end observation_star       popularity     realtime_end   realtime_start seasonal_adjustm seasonal_adjustm            title            units      units_short
Monthly                 M        80.000000           PPIACO 2022-11-15 07:52       2022-10-01       1913-01-01        80.000000       2022-11-23       2022-11-23 Not Seasonally A              NSA Producer Price I   Index 1982=100   Index 1982=100
Monthly                 M        79.000000          WPU0911 2022-11-15 07:52       2022-10-01       1926-01-01        79.000000       2022-11-23       2022-11-23 Not Seasonally A              NSA Producer Price I   Index 1982=100   Index 1982=100
Monthly                 M        79.000000            PCEPI 2022-10-28 08:40       2022-09-01       1959-01-01        78.000000       2022-11-23       2022-11-23 Seasonally Adjus               SA Personal Consump   Index 2012=100   Index 2012=100
Monthly                 M        78.000000  PCU325211325211 2022-11-15 07:55       2022-10-01       1976-06-01        78.000000       2022-11-23       2022-11-23 Not Seasonally A              NSA Producer Price I Index Dec 1980=1 Index Dec 1980=1 </pre>
<p>We can see that the FRED search report provides a thorough summary of related series. In addition to the <code>id</code>, which we will need to import the data from FRED, some other useful fields include:</p>
<ul>
<li>Frequency.</li>
<li>Popularity.</li>
<li>Last updated.</li>
<li>Observation end.</li>
<li>Observation start.</li>
<li>Seasonal adjustment status.</li>
<li>Units. </li>
</ul>
<p>For our next steps, let's use the <code>PPIACO</code> series, which is the highest popularity series related to the search term <code>Producer Price Index</code>.</p>
<div class="alert alert-info" role="alert">Note: A number of advanced search options are available; see the official documentation for <code>fred_search</code> for details.</div>
<h2 id="importing-data-from-fred">Importing Data From FRED</h2>
<h3 id="loading-a-single-series-from-fred">Loading A Single Series From FRED</h3>
<p>Next, we will import the <code>PPIACO</code> series from the FRED database into GAUSS using the <a href="https://docs.aptech.com/gauss/fred_load.html" target="_blank" rel="noopener"><code>fred_load</code></a> procedure. </p>
<p>The <code>fred_load</code> procedure requires one string input specifying the series ID to be loaded. To load the producer price data that we found with our FRED search, we will use the series ID <code>PPIACO</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Download all observations of 'PPIACO' into a GAUSS dataframe
PPI = fred_load("PPIACO");</code></pre>
<p>We can examine the first five rows of the <code>PPI</code> dataframe using the <code>head</code> procedure:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Print the first 5 rows of 'PPI'
head(PPI);</code></pre>
<p>which reports</p>
<pre>            date           PPIACO
      1913-01-01        12.100000
      1913-02-01        12.000000
      1913-03-01        12.000000
      1913-04-01        12.000000
      1913-05-01        11.900000 </pre>
<p>We can also use the <code>tail</code> procedure to examine the last 5 rows of the <code>PPI</code> dataframe:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Print the last 5 rows of 'PPI'
tail(PPI);</code></pre>
<pre>            date           PPIACO
      2022-06-01        280.25100
      2022-07-01        272.27800
      2022-08-01        269.46500
      2022-09-01        268.69300
      2022-10-01        265.19300
</pre>
<p>This shows us that the <code>PPIACO</code> data ranges from January 1913 to October 2022, which is consistent with the observation start and end dates reported in our FRED search. </p>
<h3 id="loading-multiple-series-from-fred">Loading Multiple Series From FRED</h3>
<p>The <code>fred_load</code> procedure can also be used to load multiple series from FRED simultaneously. To do this, we use a GAUSS <a href="https://www.aptech.com/resources/tutorials/loading-variables-from-a-file/" target="_blank" rel="noopener">formula string syntax</a>, using <code>+</code> to add additional series IDs to our formula string.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load producer price
// and treasury bond data
macro_data = fred_load("PPIACO + T10Y2Y");

// Preview data
head(macro_data);</code></pre>
<p>The preview of our data shows that our two series have been imported together and automatically merged by date:</p>
<pre>            date           PPIACO           T10Y2Y
      1913-01-01        12.100000                .
      1913-02-01        12.000000                .
      1913-03-01        12.000000                .
      1913-04-01        12.000000                .
      1913-05-01        11.900000                . </pre>
<p>However, the preview doesn't necessarily give us reassurance that <code>T10Y2Y</code> was loaded properly because the values for the first five observations are all missing. Let's take a quick look at some summary statistics using <a href="https://docs.aptech.com/gauss/dstatmt.html" target="_blank" rel="noopener"><code>dstatmt</code></a>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Compute and print descriptive statistics
// for all variables in 'macro_data'
dstatmt(macro_data);</code></pre>
<p>This prints a summary table to our <strong>Command Window</strong>:</p>
<pre>-----------------------------------------------------------------------------
Variable    Mean   Std Dev  Variance     Minimum     Maximum   Valid  Missing
-----------------------------------------------------------------------------

date       -----     -----     -----  1913-01-01  2022-11-25   13048        0
PPIACO     74.57      66.3      4396        10.3       280.3    1318    11730
T10Y2Y    0.9146     0.903    0.8155       -2.41        2.91   11619     1429 </pre>
<p>From this, we can tell that both series have been imported properly. However, they have different ranges, with both series having a number of missing values. </p>
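<p>If an analysis requires only the overlapping sample where both series are observed, one option is the GAUSS <code>packr</code> procedure, which drops any row containing a missing value. A minimal sketch:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Keep only rows where all variables are observed
macro_complete = packr(macro_data);

// Inspect the remaining date range
head(macro_complete);
tail(macro_complete);</code></pre>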
<h3 id="plotting-a-fred-series">Plotting a FRED Series</h3>
<p>It can be useful to view a FRED series before importing it into the GAUSS workspace. This can be done by passing the output of the <code>fred_load</code> procedure directly to <a href="https://docs.aptech.com/gauss/plotxy.html" target="_blank" rel="noopener"><code>plotXY</code></a>.</p>
<p>To do this, we need to remember the dataframe returned from <code>fred_load</code> will always contain:</p>
<ul>
<li>A date variable named <code>date</code>.</li>
<li>A variable for each series loaded, named after its series ID. </li>
</ul>
<p>As an example, let's consider viewing the FRED S&amp;P 500 series with the series ID <code>sp500</code>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">plotXY(fred_load("sp500"), "sp500 ~ date");</code></pre>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/12/g23-fred-sp500.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/12/g23-fred-sp500.jpg" alt="" width="600" height="500" class="alignnone size-full wp-image-11583235" /></a></p>
<h2 id="advanced-import-tools">Advanced Import Tools</h2>
<p>One of the most useful features of the GAUSS FRED import tools is that they can perform a number of data cleaning tasks at the time of import. In this section, we will look at how to use the FRED import tools to:</p>
<ul>
<li>Filter dates. </li>
<li>Aggregate data.</li>
<li>Perform data transformations. </li>
</ul>
<h3 id="the-fred-parameter-list">The FRED Parameter List</h3>
<p>GAUSS FRED functions use a parameter list for passing advanced settings. This list is constructed using the <a href="https://docs.aptech.com/gauss/fred_set.html" target="_blank" rel="noopener"><code>fred_set</code></a> function. </p>
<p>The <code>fred_set</code> function creates a running list of parameters you want to pass to the FRED functions. It is specified by first listing a parameter name, then the associated parameter value. </p>
<p>For example:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Create a FRED parameter list with
// 'frequency' set to 'q' (quarterly)
params_GDP = fred_set("frequency", "q");</code></pre>
<p>If we wish to add additional parameter values, we can update an existing parameter list:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set 'aggregation_method' to end-of-period
// in the previously created parameter list 'params_GDP'
params_GDP = fred_set("aggregation_method", "eop", params_GDP);</code></pre>
<p>Or we can specify all parameters at the same time:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Create a FRED parameter list with 2 settings at once.
params_GDP = fred_set("frequency", "q", "aggregation_method", "eop");</code></pre>
<p>There are a few things to note about the parameter list:</p>
<ol>
<li>The parameter specifications are case sensitive. </li>
<li>Order does not matter, except that each parameter name must be directly followed by its associated value. For example, we could have also specified: </li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">params_GDP = fred_set("aggregation_method", "eop", "frequency", "q");</code></pre>
<p>Next, we'll look at how to use the parameter list for advanced FRED data import. </p>
<h3 id="filtering-dates">Filtering Dates</h3>
<p>The <code>observation_start</code> and/or <code>observation_end</code> parameters can be used to filter the range of imported data. </p>
<p>For example, suppose we are interested in loading seasonally adjusted CPI data for all dates after 1971. Let's start by searching for the series ID we want to load:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Read series information from FRED and print first 5 rows
head(fred_search("consumer price index seasonally adjusted"));</code></pre>
<pre>       frequency  frequency_short group_popularity               id     last_updated            notes  observation_end observation_star       popularity     realtime_end   realtime_start seasonal_adjustm seasonal_adjustm            title            units      units_short
         Monthly                M        95.000000         CPIAUCSL 2022-11-10 07:38 The Consumer Pri       2022-10-01       1947-01-01        94.000000       2022-11-28       2022-11-28 Seasonally Adjus               SA Consumer Price I Index 1982-1984= Index 1982-1984=
         Monthly                M        95.000000         CPIAUCNS 2022-11-10 07:38 Handbook of Meth       2022-10-01       1913-01-01        71.000000       2022-11-28       2022-11-28 Not Seasonally A              NSA Consumer Price I Index 1982-1984= Index 1982-1984=
      Semiannual               SA        95.000000      CUUS0000SA0 2022-07-13 07:37                .       2021-01-01       1913-01-01        38.000000       2022-11-28       2022-11-28 Not Seasonally A Consumer Price I Inflation, consu          Percent Index 1982-1984=
          Annual                A        84.000000   FPCPITOTLZGUSA 2022-05-03 14:01 Inflation as mea       2021-01-01       1960-01-01        84.000000       2022-11-28       2022-11-28 Not Seasonally A              NSA Inflation, consu          Percent                %
         Monthly                M        83.000000  CPALTT01USM657N 2022-11-14 14:25 OECD descriptor        2022-09-01       1960-01-01        80.000000       2022-11-28       2022-11-28 Not Seasonally A              NSA Consumer Price I Growth rate prev Growth rate prev </pre>
<p>It looks like the best series for us to use is &quot;CPIAUCSL&quot;. However, this series starts in January 1947. </p>
<p>We can tell GAUSS to only import data starting from 1971 by setting the <code>observation_start</code> parameter to <code>"1971-01-01"</code> using the <code>fred_set</code> procedure:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set observation_start parameter
// to use all data on or after 1971-01-01
params_cpi = fred_set("observation_start", "1971-01-01");</code></pre>
<p>Now we can load our CPI data using <code>fred_load</code> with two inputs:</p>
<ol>
<li>The series ID.</li>
<li>The parameter list, <code>params_cpi</code>.</li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Load data using a parameter list
cpi_m = fred_load("CPIAUCSL", params_cpi);

// Preview first 5 rows of data
head(cpi_m);</code></pre>
<p>Our data preview shows that the imported data starts on January 1, 1971:</p>
<pre>            date         CPIAUCSL
      1971-01-01        39.900000
      1971-02-01        39.900000
      1971-03-01        40.000000
      1971-04-01        40.100000
      1971-05-01        40.300000 </pre>
<h3 id="aggregating-data">Aggregating Data</h3>
<p>Next, suppose we want to aggregate our data from monthly to quarterly data. The FRED import tools provide a convenient way to do this at the time of import using the <code>frequency</code> parameter. </p>
<p>The <code>frequency</code> parameter allows you to specify the frequency of data you would like. The specified frequency can only be the same as or lower than the frequency of the original series. </p>
<p>Frequency options include:</p>
<table>
 <thead>
<tr><th>Specifier</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>"d"</td><td>Daily</td></tr>
<tr><td>"w"</td><td>Weekly</td></tr>
<tr><td>"bw"</td><td>Biweekly</td></tr>
<tr><td>"m"</td><td>Monthly</td></tr>
<tr><td>"q"</td><td>Quarterly</td></tr>
<tr><td>"sa"</td><td>Semiannual</td></tr>
<tr><td>"a"</td><td>Annual</td></tr>
</tbody>
</table>
<p>The default aggregation method is averaging. However, the <code>aggregation_method</code> parameter can be used to specify a different method. Aggregation options include:</p>
<table>
 <thead>
<tr><th>Specifier</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>"avg"</td><td>Average</td></tr>
<tr><td>"sum"</td><td>Sum</td></tr>
<tr><td>"eop"</td><td>End of Period</td></tr>
</tbody>
</table>
<p>Let's use the <code>frequency</code> parameter to aggregate the monthly &quot;CPIAUCSL&quot; series to quarterly observations. We will also use the <code>aggregation_method</code> to specify that end-of-period aggregation is used:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set parameter list
// Include previously specified
// parameter list to append new specifications
params_cpi = fred_set("frequency", "q", "aggregation_method", "eop", params_cpi);

// Load quarterly CPI
cpi_q_eop  = fred_load("CPIAUCSL", params_cpi);

head(cpi_q_eop);</code></pre>
<pre>            date         CPIAUCSL
      1971-01-01        40.000000
      1971-04-01        40.500000
      1971-07-01        40.800000
      1971-10-01        41.100000
      1972-01-01        41.400000</pre>
<p>The <code>cpi_q_eop</code> dataframe now contains quarterly data starting in January 1971. </p>
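<p>For comparison, omitting <code>aggregation_method</code> (or setting it to <code>"avg"</code>) would average the three monthly observations within each quarter instead of taking the last one. A sketch, building a fresh parameter list so the earlier <code>eop</code> setting doesn't carry over:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Quarterly CPI using the default averaging aggregation
params_avg = fred_set("observation_start", "1971-01-01", "frequency", "q");
cpi_q_avg = fred_load("CPIAUCSL", params_avg);

// Compare against the end-of-period values above
head(cpi_q_avg);</code></pre>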
<h3 id="transformations">Transformations</h3>
<p>Finally, suppose we want to use our CPI data to study <a href="https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/" target="_blank" rel="noopener">inflation</a>. With the FRED import tools, we can do this using the <code>units</code> parameter with the <code>fred_load</code> procedure. </p>
<p>The <code>units</code> options include:</p>
<table>
 <thead>
<tr><th>Specifier</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>"lin"</td><td>Levels (no transformation).</td></tr>
<tr><td>"chg"</td><td>Change.</td></tr>
<tr><td>"ch1"</td><td>Change from one year ago.</td></tr>
<tr><td>"pch"</td><td>Percent change.</td></tr>
<tr><td>"pc1"</td><td>Percent change from one year ago.</td></tr>
<tr><td>"pca"</td><td>Compounded annual rate of change.</td></tr>
<tr><td>"cch"</td><td>Continuously compounded rate of change.</td></tr>
<tr><td>"cca"</td><td>Continuously compounded annual rate of change.</td></tr>
<tr><td>"log"</td><td>Natural log.</td></tr>
</tbody>
</table>
<p>Let's update our <code>params_cpi</code> parameter list and import the percent change of &quot;CPIAUCSL&quot; from a year ago. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set params
params_cpi = fred_set("units", "pc1", params_cpi);

// Load quarterly CPI
infl_q  = fred_load("CPIAUCSL", params_cpi);
plotXY(infl_q,  "CPIAUCSL ~ date");</code></pre>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/12/g23-fred-cpiaucsl.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/12/g23-fred-cpiaucsl.jpg" alt="Graph of CPI data." width="600" height="400" class="aligncenter size-full wp-image-11583255" /></a></p>
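<p>As a quick sanity check on the <code>units</code> transformation, the year-over-year percent change can also be computed manually from the quarterly levels loaded earlier (a sketch; <code>lagn</code> shifts the series by 4 quarters, so the first 4 values are missing):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Manually compute year-over-year percent change
// from the quarterly end-of-period levels; this should
// match the "pc1" series returned by fred_load
cpi_lag4 = lagn(cpi_q_eop[., "CPIAUCSL"], 4);
infl_check = 100 * (cpi_q_eop[., "CPIAUCSL"] ./ cpi_lag4 - 1);</code></pre>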
<h2 id="conclusion">Conclusion</h2>
<p>In today's blog, we saw how the GAUSS FRED integration introduced in GAUSS 23 can save you time and effort when it comes to working with FRED data. </p>
<p>We learned how to use the FRED import tools to:</p>
<ul>
<li>Search for a FRED data series. </li>
<li>Import FRED data to GAUSS, including merging multiple series. </li>
<li>Use advanced import tools to perform data transformations. </li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/importing-fred-data-to-gauss/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Addressing Conditional Heteroscedasticity in SVAR Models</title>
		<link>https://www.aptech.com/blog/addressing-conditional-heteroscedasticity-in-svar-models/</link>
					<comments>https://www.aptech.com/blog/addressing-conditional-heteroscedasticity-in-svar-models/#respond</comments>
		
		<dc:creator><![CDATA[Jamel]]></dc:creator>
		<pubDate>Wed, 14 Sep 2022 17:30:12 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11582836</guid>

					<description><![CDATA[Structural VAR models are powerful tools in macroeconomic time series modeling. However, given their vast applications, it is important that they are properly implemented to address the characteristics of their underlying data. 

In today’s blog, we build on our previous discussions of SVAR models to examine the use of SVAR in the special case of conditional heteroscedasticity. 

We will look more closely at:
<ul>
<li>Conditional heteroscedasticity.</li>
<li>The impacts of conditional heteroscedasticity on SVAR models.</li>
<li>Estimating structural impulse response functions (SIRF) in the presence of conditional heteroscedasticity.</li>
<li> An application to the global oil market. </li>
</ul>]]></description>
										<content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>Structural VAR models are powerful tools in macroeconomic time series modeling. However, given their vast applications, it is important that they are properly implemented to address the characteristics of their underlying data. </p>
<p>In today’s blog, we build on our previous discussions of SVAR models to examine the use of SVAR in the special case of conditional heteroscedasticity. </p>
<p>We will look more closely at:</p>
<ul>
<li>Conditional heteroscedasticity.</li>
<li>The impacts of conditional heteroscedasticity on SVAR models.</li>
<li>Estimating structural impulse response functions (SIRF) in the presence of conditional heteroscedasticity.</li>
<li>An application to the global oil market. </li>
</ul>
<h2 id="what-is-conditional-heteroscedasticity">What is Conditional Heteroscedasticity?</h2>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/wilshire.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/wilshire.jpg" alt="Example of conditional heteroscedasticity." width="800" height="600" class="alignnone size-full wp-image-11582927" /></a></p>
<p>Simply put, <a href="https://www.aptech.com/resources/tutorials/econometrics/ols-diagnostics-heteroscedasticity/" target="_blank" rel="noopener">heteroscedasticity</a> is defined as a non-constant variance for the residuals of a model. As noted by <a href="https://www.economica.fr/econometrie-theorie-et-applications-2e-ed-c2x36878469" target="_blank" rel="noopener">Mignon (2022, chapter 4)</a>, when the errors are non-spherical, heteroscedasticity may come from:</p>
<ul>
<li>Heterogeneity of the sample.</li>
<li><a href="https://www.aptech.com/resources/tutorials/econometrics/ols-diagnostics-model-specification/" target="_blank" rel="noopener">Missing variables.</a></li>
<li>Asymmetric distributions of the variables (distributions of income and wealth, for example).</li>
<li>Improper transformation of the variables (linear instead of log-linear).</li>
<li>Nature of the data (averaged observations coming from different samples). </li>
</ul>
<p>Conditional heteroscedasticity is a particular form of heteroscedasticity. Many macroeconomic and finance applications, such as daily financial <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">time series</a> or monthly growth rates, exhibit periods of low volatility and other periods of high volatility. These clusters of volatility are also known as conditional heteroscedasticity because one period’s volatility is related to previous periods' volatility. </p>
<h2 id="what-are-the-implications-of-conditional-heteroscedasticity">What are the Implications of Conditional Heteroscedasticity?</h2>
<p>Linear regression models assume that there is no heteroscedasticity, and the presence of any type of heteroscedasticity can impact the validity of regression results if not accounted for properly. </p>
<p>In particular, while the <a href="https://www.aptech.com/resources/tutorials/formula-string-syntax/ols-regression-from-a-dataset/" target="_blank" rel="noopener">OLS</a> coefficient estimates will be unbiased:</p>
<ul>
<li>OLS estimators are no longer efficient. </li>
<li>The covariance matrix of the coefficient estimates will be inconsistent. </li>
<li>Standard inferences will be incorrect.</li>
</ul>
<h2 id="what-are-the-consequences-for-inference-on-structural-impulse-response-functions-sirfs">What are the Consequences for Inference on Structural Impulse Response Functions (SIRFs)?</h2>
<p><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">Recursive VAR systems</a> are highly common in empirical work in macroeconomics and finance. However, as previously noted, macroeconomic and financial data can often suffer from conditional heteroscedasticity. For this reason, it is important to consider the impacts of conditional heteroscedasticity in the context of structural VAR.</p>
<p>In SVAR models, <a href="https://www.aptech.com/blog/the-intuition-behind-impulse-response-functions-and-forecast-error-variance-decomposition/" target="_blank" rel="noopener">impulse response functions</a> are often a more important result than the estimated parameters. They help us to better understand the dynamic relationship between the variables in our model.</p>
<p>Because SIRFs are widely used for analysis, it’s imperative to have a clear understanding of the uncertainty in their estimations. This is widely done using some type of <a href="https://www.aptech.com/blog/basic-bootstrapping-in-gauss/" target="_blank" rel="noopener">bootstrapped</a> confidence intervals which provide insight into the statistical likelihood of a SIRF.</p>
<table>
 <thead>
<tr><th>Method</th><th>Description</th></tr>
</thead>
<tbody>
<tr><th>Pairwise bootstrap</th><td>Draws samples from the joint distribution of the dependent and independent variables.</td></tr>
<tr><th>Wild bootstrap</th><td>A new sample is generated by multiplying the prediction residual by a random variable and adding it to the prediction.</td></tr>
<tr><th>Moving block bootstrap</th><td>Data is split into overlapping blocks of length <em>l</em>, and a specified number of blocks is drawn at random, with replacement, from the partitioned data.</td></tr>
</tbody>
</table>
<p>In the context of bootstrapped SIRFs, when not properly accounted for, conditional heteroscedasticity can lead to distorted inferences. However, not all bootstrap methods perform equally when facing conditional heteroscedasticity. </p>
<p>Brüggemann et al. (2016) have shown that in the presence of conditional heteroscedasticity:</p>
<ul>
<li>The wild and pairwise bootstraps underestimate the true asymptotic variances.</li>
<li>The corresponding confidence intervals with pairwise and wild bootstraps are typically too narrow and miss the true value of the SIRF.</li>
</ul>
<p>The main message here is that the pairwise and wild bootstraps understate the actual estimation uncertainty in the presence of conditional heteroscedasticity. If you detect conditional heteroscedasticity in your data, the MBB provides a better estimate of the true degree of uncertainty in the SIRF than the pairwise or wild bootstrap. </p>
<h2 id="what-is-moving-block-bootstrap">What is Moving Block Bootstrap?</h2>
<p>The basic idea of the bootstrap rests on the assumption that the simulated data sample that we use does a good job representing the actual population data. The moving block bootstrap does this by:</p>
<ol>
<li>Generating consecutive, overlapping “blocks” of the original sample.</li>
<li>Drawing a specified number of “blocks” at random, with replacement. </li>
<li>Combining the sampled blocks to create a complete bootstrapped sample.  </li>
</ol>
<p>This method of sampling is particularly relevant for time series data because it maintains the relationship between neighboring observations. </p>
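<p>The steps above can be sketched in GAUSS (a minimal illustration using a hypothetical helper, <code>mbbSample</code>; this is not the TSMT or Brüggemann et al. implementation):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Minimal moving block bootstrap sketch
// (hypothetical helper for illustration only)
proc (1) = mbbSample(y, block_len);
    local T, n_blocks, starts, idx, i;
    T = rows(y);

    // Number of blocks needed to cover the full sample
    n_blocks = ceil(T / block_len);

    // Random starting points for the overlapping blocks
    starts = ceil((T - block_len + 1) .* rndu(n_blocks, 1));

    // Stack the sampled blocks into one index vector
    idx = zeros(n_blocks * block_len, 1);
    for i(1, n_blocks, 1);
        idx[(i-1)*block_len+1:i*block_len] = seqa(starts[i], 1, block_len);
    endfor;

    // Trim to the original sample length and return the resampled data
    retp(y[idx[1:T], .]);
endp;</code></pre>
<p>Each call returns one bootstrapped sample of the same length as the original series; repeating the call and re-estimating the model on each sample yields the bootstrap distribution used for the confidence intervals.</p>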
<h2 id="an-application-to-the-effects-of-us-china-political-tensions-on-the-oil-market">An Application to the Effects of US-China Political Tensions on the Oil Market</h2>
<p>As an example application of MBB in the SVAR framework, we’ll look at the newly released paper of <a href="https://ideas.repec.org/a/eee/eneeco/v114y2022ics0140988322003498.html" target="_blank" rel="noopener">Cai, Mignon, and Saadaoui (2022)</a>.</p>
<h3 id="background">Background</h3>
<p>The motivation behind this paper is the intuition that causal interactions exist between political events and economic developments. It specifically considers the idea that increasing political tensions between the US and China will impact the world economy and the oil market. This is a reasonable concern, given that the US and Chinese economies are the largest consumers of oil (around 20% and 16% of the world's consumption, respectively), according to the <a href="https://www.bp.com/content/dam/bp/business-sites/en/global/corporate/pdfs/energy-economics/statistical-review/bp-stats-review-2021-full-report.pdf" target="_blank" rel="noopener">BP Statistical Review of World Energy 2021</a>.</p>
<h3 id="data">Data</h3>
<p>The paper focuses on data over the period spanning from January 1971 to December 2019. </p>
<p>Relying on an extended period allows them to:</p>
<ul>
<li>Highlight the growing influence of the Chinese economy on the international scene.</li>
<li>Have a complete picture of the evolution of the political relationships between China and the US through time.</li>
</ul>
<h4 id="measuring-political-tensions">Measuring political tensions</h4>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/pri-graph-1.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/pri-graph-1.jpg" alt="US-China political relationship index. " width="800" height="600" class="alignnone size-full wp-image-11582934" /></a></p>
<p>In order to capture the effects of political tensions on the oil market, Cai, Mignon, and Saadaoui (2022) rely on a <a href="http://www.tuiir.tsinghua.edu.cn/imiren/Publications/Foreign_RelationsData.htm" target="_blank" rel="noopener">quantitative measure of political relationships developed at the Institute of International Relations at Tsinghua University</a>. </p>
<p>This index:</p>
<ul>
<li>Fluctuates between -9 and 9 according to the occurrence of “bad” or “good” political events, using a scale similar to the <a href="https://journals.sagepub.com/doi/10.1177/0022002792036002007" target="_blank" rel="noopener">Goldstein scale (Goldstein, 1992).</a></li>
<li>Shows improved relationships between the US and China at the end of the 1970s and the end of the 1990s, when positive diplomatic developments occurred.</li>
<li>Indicates the relationship deteriorated considerably during the Tiananmen Square Event in 1989, after the bombing of the Chinese embassy in Belgrade in 1999, and during Trump’s administration.</li>
</ul>
<h4 id="measuring-the-oil-market">Measuring the oil market</h4>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/oil-market-1.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/oil-market-1.jpg" alt="Oil market supply, demand, and real price. " width="800" height="600" class="alignnone size-full wp-image-11582935" /></a></p>
<p>This paper uses three oil market indicators:</p>
<ul>
<li>Global oil supply.</li>
<li>World oil demand.</li>
<li>Real price of oil, measured by the WTI spot price deflated by the US consumer price index. </li>
</ul>
<div class="alert alert-info" role="alert">All oil market data is available on <a href="https://sites.google.com/site/cjsbaumeister/datasets" target="_blank" rel="noopener">Christiane Baumeister’s website</a>.</div>
<h3 id="model">Model</h3>
<h4 id="the-svar-setup">The SVAR setup</h4>
<p>Today, we will be considering an SVAR model that examines the effects of US-China political tensions on the oil market. This model is based on the newly released paper from Cai, Mignon, and Saadaoui (2022).</p>
<p>This model considers four endogenous variables, included in logarithmic terms:</p>
<ul>
<li>US-China political tension index</li>
<li>Oil supply</li>
<li>Oil demand</li>
<li>Oil price</li>
</ul>
<h4 id="estimation">Estimation</h4>
<p>We use a <a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/" target="_blank" rel="noopener">recursive identification scheme</a>, applying a <a href="https://docs.aptech.com/gauss/chol.html" target="_blank" rel="noopener">Cholesky decomposition</a> of the residual covariance matrix to obtain a lower-triangular impact matrix. </p>
<p>The identification scheme:</p>
<ul>
<li>Reflects the hypothesis in the literature that political tensions influence contemporaneous market developments, but the reverse causality takes some time. </li>
<li>Captures our primary interest, the short-run effects of political tensions on the oil market.</li>
</ul>
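<p>In GAUSS, this identification step reduces to a single line (a sketch; <code>sigma_u</code> denotes the reduced-form residual covariance matrix and is assumed to be already estimated):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// chol() returns the upper-triangular factor R with R'R = sigma_u,
// so the lower-triangular structural impact matrix is its transpose
B0inv = chol(sigma_u)';</code></pre>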
<p>The baseline SVAR model:</p>
<ul>
<li>Is estimated from January 1971 to December 2019.</li>
<li>Includes 24 lags. </li>
</ul>
<h4 id="impulse-response-functions">Impulse response functions</h4>
<p>The confidence intervals for the structural IRFs are computed using MBB to address the issue of conditional heteroscedasticity. Estimation follows the methodology in Brüggemann, R., Jentsch, C., &amp; Trenkler, C. (2016).</p>
<div class="alert alert-info" role="alert">The GAUSS code for this estimation was kindly provided by <a href="https://www.wiwi.uni-konstanz.de/brueggemann/team/prof-dr-ralf-brueggemann/" target="_blank" rel="noopener">Professor Ralf Brüggemann</a> and requests for the full code must be addressed to him.</div>
<p>The impulse response functions:</p>
<ul>
<li>Have an 80-month horizon.</li>
<li>Reflect the impact of a 1% decrease in the US-China PRI measure. </li>
<li>Include 68% and 90% confidence intervals. </li>
</ul>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/irf-plots-2.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/irf-plots-2.jpg" alt="Impulse response functions measuring the impact of 1% change in U.S/China Political Relationship Index on oil market conditions." width="800" height="600" class="alignnone size-full wp-image-11582973" /></a></p>
<p>From these impulse response functions, we can see that a 1% decrease in the US-China relationships leads to:</p>
<ul>
<li>Insignificant short-run impacts. </li>
<li>A long-run increase in oil production.</li>
<li>A long-run decrease in oil demand.</li>
<li>A long-run increase in real oil prices. </li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>Today's blog looks more closely at the issue of inference in the structural VAR model. In particular, we consider the impact of conditional heteroscedasticity on inference for structural impulse response functions. </p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">Introduction to the Fundamentals of Time Series Data and Analysis</a>  </li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">Introduction to the Fundamentals of Vector Autoregressive Models</a>  </li>
<li><a href="https://www.aptech.com/blog/the-intuition-behind-impulse-response-functions-and-forecast-error-variance-decomposition/" target="_blank" rel="noopener">The Intuition Behind Impulse Response Functions and Forecast Error Variance Decomposition</a>  </li>
<li><a href="https://www.aptech.com/blog/introduction-to-granger-causality/" target="_blank" rel="noopener">Introduction to Granger Causality</a>  </li>
<li><a href="https://www.aptech.com/blog/understanding-and-solving-the-structural-vector-autoregressive-identification-problem/" target="_blank" rel="noopener">Understanding and Solving the Structural Vector Autoregressive Identification Problem</a>  </li>
<li><a href="https://www.aptech.com/blog/the-structural-var-model-at-work-analyzing-monetary-policy/" target="_blank" rel="noopener">The Structural VAR Model at Work: Analyzing Monetary Policy</a> </li>
</ol>
<h3 id="references">References</h3>
<p>Brüggemann, R., Jentsch, C., &amp; Trenkler, C. (2016). Inference in VARs with conditional heteroskedasticity of unknown form. Journal of Econometrics, 191(1), 69-85. <a href="https://doi.org/10.1016/j.jeconom.2015.10.004">https://doi.org/10.1016/j.jeconom.2015.10.004</a></p>
<p>Cai, Y., Mignon, V., &amp; Saadaoui, J. (2022). Not All Political Relation Shocks are Alike: Assessing the Impacts of US-China Tensions on the Oil Market. Energy Economics, 106199. <a href="https://doi.org/10.1016/j.eneco.2022.106199">https://doi.org/10.1016/j.eneco.2022.106199</a></p>
<p>Mignon, V. (2022). Econométrie: Théorie et applications. Economica. 2ème edition.</p>
<h2 id="try-out-the-gauss-time-series-mt-library">Try Out The GAUSS Time Series MT Library</h2>

[contact-form-7]
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/addressing-conditional-heteroscedasticity-in-svar-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Unobserved Components Models; The Local Level Model</title>
		<link>https://www.aptech.com/blog/unobserved-components-models-the-local-level-model/</link>
					<comments>https://www.aptech.com/blog/unobserved-components-models-the-local-level-model/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Tue, 06 Sep 2022 04:09:42 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11582842</guid>

					<description><![CDATA[In today's blog, we explore a simple but powerful member of the unobserved components family - the local level model. This model provides a straightforward method for understanding the dynamics of time series data. 

This blog will examine:
<ul>
<li>Time series decomposition. </li>
<li>Unobserved components and the local level model.</li>
<li>Understanding the estimated results for a local level model. </li>
</ul>]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>In today's blog, we explore a simple but powerful member of the unobserved components family - the local level model. This model provides a straightforward method for understanding the dynamics of time series data. </p>
<p>This blog will examine:</p>
<ul>
<li>Time series decomposition. </li>
<li>Unobserved components and the local level model.</li>
<li>Understanding the estimated results for a local level model. </li>
</ul>
<h2 id="time-series-decomposition">Time Series Decomposition</h2>
<p>Time series decomposition is a key methodology for better understanding the dynamics of <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">time series data</a>. Put very simply, time series decomposition is the process of separating a time series into the underlying components that drive the data movements over time. </p>
<p>Time series decomposition:</p>
<ul>
<li>Allows us to better understand the movements of time series data. </li>
<li>Can be applied to improve forecast accuracy.</li>
<li>Generally involves splitting data into a seasonal component, trend/cycle components, and remainder component (Hyndman and Athanasopoulos, 2018).</li>
</ul>
<table>
 <thead>
<tr><th>Component</th><th>Description</th><th>Examples</th></tr>
</thead>
<tbody>
<tr><td>Seasonal</td><td>Occurs when time series data exhibits regular and predictable patterns at time intervals that are smaller than a year.</td><td>
<ul><li>Retail sales traditionally show higher spending around the holiday season.</li>
<li>Monthly housing sales show seasonality with higher sales in spring and summer and lower sales in fall and winter.</li> 
<li>CO<sub>2</sub> concentrations have a seasonal component that peaks in June and is at its lowest in September.</li></ul> 
</td></tr>
<tr><td>Trend</td><td>Time trends are deterministic components that change in proportion to the time period.  </td><td>
<ul>
<li>Macroeconomic price indices tend to show upward time trends.</li>
<li>Cost data for businesses often increase with time.</li>
<li>Incidence and prevalence rates of diseases and health issues trend with time.</li>
</ul> </td></tr>
<tr><td>Cycle</td><td>The cyclical component occurs when there are non-standard rises and falls in data (Hyndman and Athanasopoulos, 2018).  </td><td>The concept of business cycles in economic data is an example of a cyclical component.</td></tr>
</tbody></table>
<h2 id="classical-decomposition">Classical Decomposition</h2>
<p>Classical decomposition is a fundamental concept in time series decomposition. This basic but powerful model is used to decompose a time series such that</p>
<p>$$y_t = \mu_t + \gamma_t + c_t + \epsilon_t$$</p>
<p>where</p>
<p>$$\begin{align} y_t &= \text{observation} \nonumber \\ \mu_t &= \text{slowly changing component (trend)} \nonumber \\ \gamma_t &= \text{periodic component (seasonal)} \nonumber \\ c_t &= \text{cyclical component } \nonumber \\ \epsilon_t &= \text{irregular component (disturbance)} \nonumber \\ \end{align}$$</p>
<table>
 <thead>
<tr><th>Advantages of Classical Decomposition</th><th>Disadvantages of Classical Decomposition (Hyndman and Athanasopoulos, 2018)</th></tr>
</thead>
<tbody>
<tr><td>
<ul><li>Simple to implement.</li>
    <li>Foundation of other decomposition methods.</li>
    <li>Clear interpretation.</li>
</ul></td>
<td>
<ul><li>Cannot capture irregularities that may occur over short periods (i.e. structural breaks).</li>
    <li>Assume that the seasonal dynamics are the same over time.</li>
    <li>Does a poor job capturing rapid rises and falls in data. </li>
</ul>
</td></tr>
</tbody></table>
<h2 id="the-local-level-model">The Local Level Model</h2>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/nile-local-level-1.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/nile-local-level-1.jpg" alt="The level and trend component of the local level model. " width="600" height="400" class="aligncenter size-full wp-image-11582873" /></a></p>
<p>The local level model is a linear regression model that models the unobserved stochastic trend and irregular component such that:</p>
<p>$$\begin{align} y_t &= \mu_t + \epsilon_t, \epsilon_t \sim NID(0, \sigma_{\epsilon}^2) \nonumber \\ \mu_{t+1} &= \mu_t + \eta_t, \eta_t \sim NID(0, \sigma_{\eta}^2) \nonumber \\ \end{align} $$</p>
<p>There are a few noteworthy aspects of this model:</p>
<ul>
<li>The components $\mu_t$ and $y_t$ are <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/" target="_blank" rel="noopener">nonstationary</a>. Put in other terms, this is a time series model that is appropriate for nonstationary data. </li>
<li>The unknown parameters in this model are $\sigma_{\epsilon}^2$ and $\sigma_{\eta}^2$.</li>
<li>The unobserved component included in this model is the stochastic trend, $\mu_t$.</li>
<li>If $\sigma_{\epsilon}^2$ goes to zero the model becomes a random walk. </li>
<li>If $\sigma_{\eta}^2$ goes to zero the model becomes white noise with a constant mean. </li>
</ul>
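<p>Writing out these two limiting cases makes the interpretation concrete:</p>
<p>$$\sigma_{\eta}^2 = 0 \Rightarrow \mu_{t+1} = \mu_t \equiv \mu \Rightarrow y_t = \mu + \epsilon_t \text{ (white noise around a constant mean)}$$</p>
<p>$$\sigma_{\epsilon}^2 = 0 \Rightarrow y_t = \mu_t \Rightarrow y_{t+1} = y_t + \eta_t \text{ (a random walk)}$$</p>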
<h3 id="estimating-the-local-level-model">Estimating the Local Level Model</h3>
<p>Because the local level model contains an unobserved component, it fits nicely into the <a href="https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/" target="_blank" rel="noopener">state-space framework</a>. Both the unobserved component and the unknown parameters can be estimated using the <a href="https://www.aptech.com/resources/tutorials/tsmt/filtering-data-with-the-kalman-filter/" target="_blank" rel="noopener">Kalman filter</a> and <a href="https://www.aptech.com/blog/beginners-guide-to-maximum-likelihood-estimation-in-gauss/" target="_blank" rel="noopener">maximum likelihood estimation</a>.</p>
<table>
 <thead>
<tr><th colspan="2">Local level model state-space representation</th></tr>
</thead>
<tbody>
<tr><td>Measurement Equation</td><td>$y_t = \mu_t + \epsilon_t$</td></tr>
<tr><td>Transition Equation</td><td>$\mu_{t+1} = \mu_t + \eta_t$</td></tr>
</tbody></table>
<p>where</p>
<table>
 <thead>
<tr><th>Object</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>$d$</td><td>0</td></tr>
<tr><td>$Z$</td><td>1</td></tr>
<tr><td>$H$</td><td>$\sigma_{\epsilon}^2$</td></tr>
<tr><td>$c$</td><td>0</td></tr>
<tr><td>$T$</td><td>1</td></tr>
<tr><td>$R$</td><td>1</td></tr>
<tr><td>$Q$</td><td>$\sigma_{\eta}^2$</td></tr>
</tbody></table>
<div class="alert alert-info" role="alert">For more information on specifying and estimating state-space models like this one, see our blog,  <a href="https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/" target="_blank" rel="noopener">Understanding State-Space Models (An Inflation Example)</a>.</div>
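<p>To illustrate how the Kalman filter recursions work for this model, here is a minimal from-scratch GAUSS sketch (not the state-space library routines; it takes the variances $\sigma_{\epsilon}^2$ and $\sigma_{\eta}^2$ as known rather than estimating them by maximum likelihood):</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Kalman filter for the local level model (illustrative sketch;
// variances are assumed known, not estimated by MLE)
proc (1) = llKalmanFilter(y, sig2_eps, sig2_eta);
    local T, a, p, v, f, k, a_filt, t;
    T = rows(y);
    a = y[1];          // Initialize state at first observation
    p = 1e7;           // Diffuse initial state variance
    a_filt = zeros(T, 1);

    for t(1, T, 1);
        v = y[t] - a;                 // One-step prediction error
        f = p + sig2_eps;             // Prediction error variance
        k = p / f;                    // Kalman gain
        a = a + k * v;                // Filtered state (trend estimate)
        p = p * (1 - k) + sig2_eta;   // State variance for next period
        a_filt[t] = a;
    endfor;

    retp(a_filt);
endp;</code></pre>
<p>In practice, the unknown variances are chosen to maximize the likelihood built from the prediction errors <code>v</code> and their variances <code>f</code>.</p>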
<h2 id="data-application-australian-quarterly-inflation">Data Application: Australian Quarterly Inflation</h2>
<p>As an example application, let’s use this model to decompose Australian CPI quarterly inflation into a trend component and a disturbance component. We'll then look at the performance of this model using residual diagnostics. </p>
<h3 id="data">Data</h3>
<p>Today we will examine quarterly inflation in Australia using data from the  <a href="https://www.abs.gov.au/statistics/economy/price-indexes-and-inflation/consumer-price-index-australia/latest-release" target="_blank" rel="noopener">Australian Bureau of Statistics</a>.</p>
<table>
 <thead>
</thead>
<tbody>
<tr><th>Data source</th><td>Australian Bureau of Statistics</td></tr>
<tr><th>Full sample range</th><td>1948-Q4 to 2022-Q2</td></tr>
<tr><th>Estimation period</th><td>1948-Q4 to 2019-Q4</td></tr>
<tr><th>Forecasting period</th><td>2020-Q1 to 2022-Q2</td></tr>
<tr><th>Series name</th><td>Percentage Change from Previous Period;All groups CPI;Australia;        
</td></tr>
<tr><th>Series ID</th><td>A2325850V</td></tr>
</tbody>
</table>
<h3 id="results">Results</h3>
<p>The maximum likelihood estimates for the local level model are</p>
<table>
 <thead>
<tr><th>Parameter</th><th>Estimate</th><th>Probability Value</th></tr>
</thead>
<tbody>
<tr><td>$\sigma_{\epsilon}^2$</td><td>0.6383</td><td>0.0000</td></tr>
<tr><td>$\sigma_{\eta}^2$</td><td>0.1241</td><td>0.0010</td></tr>
</tbody></table>
<p>These estimates show that both $\sigma_{\epsilon}^2$ and $\sigma_{\eta}^2$ are statistically different from zero with p-values of 0.0000 and 0.0010, respectively. </p>
<p>The local level model also produces, via the Kalman filter, an estimate of the unobserved trend component of inflation. When plotted with the observed inflation rate, we can see that the unobserved component appears to be a smoothed version of the observed series. </p>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/observed-smoothed-inflation.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/observed-smoothed-inflation.jpg" alt="Australian CPI quarterly inflation local level model. " width="600" height="400" class="aligncenter size-full wp-image-11582875" /></a></p>
<p>Once the trend is removed, we are left with the disturbance component. The <a href="https://www.aptech.com/resources/tutorials/time-series-plots/" target="_blank" rel="noopener">time series plot</a> of the disturbances suggests that there may still be some regularity in the series that we haven’t accounted for. </p>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/09/disturbances-2.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/09/disturbances-2.jpg" alt="Australian CPI quarterly inflation local level model disturbances. " width="600" height="400" class="aligncenter size-full wp-image-11582876" /></a></p>
<h3 id="residual-diagnostics">Residual Diagnostics</h3>
<p>To gain a better idea of the quality of our model, we will run some diagnostic tests on the disturbances. </p>
<table>
 <thead>
<tr><th>Test</th><th>Statistic</th><th>Probability Value</th><th>Conclusion</th></tr>
</thead>
<tbody>
<tr><td><a href="https://docs.aptech.com/gauss/sslib/ssljungbox.html" target="_blank" rel="noopener">Ljung-Box test for autocorrelation.</a></td><td>1.7</td><td>0.192</td><td>Fail to reject the null hypothesis that the residuals are independently distributed.</td></tr>
<tr><td><a href="https://docs.aptech.com/gauss/sslib/ssheteroskedasticitytest.html" target="_blank" rel="noopener">Test for heteroskedasticity.</a></td><td>0.289</td><td>0.000</td><td>Reject the null hypothesis of no heteroskedasticity at the 1% level.</td></tr>
<tr><td><a href="https://docs.aptech.com/gauss/sslib/ssjarquebera.html" target="_blank" rel="noopener">Jarque-Bera goodness-of-fit test.</a></td><td>90.2</td><td>0.000</td><td>Reject the null hypothesis that the data is normally distributed at the 1% level.</td></tr>
</tbody></table>
<p>These results support what we saw in the visualizations of the disturbances - there are remaining patterns in the disturbances that are not accounted for in our model.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Today we've learned more about the simple but powerful local level model. The local level model is a fundamental tool that allows us to decompose time series data into unobserved components. This, in turn, helps us better understand the dynamics of our data.  </p>
<p>    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // caculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script>
</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/unobserved-components-models-the-local-level-model/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Understanding State-Space Models (An Inflation Example)</title>
		<link>https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/</link>
					<comments>https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/#comments</comments>
		
		<dc:creator><![CDATA[Eric]]></dc:creator>
		<pubDate>Thu, 04 Aug 2022 05:43:04 +0000</pubDate>
				<category><![CDATA[Econometrics]]></category>
		<category><![CDATA[Time Series]]></category>
		<guid isPermaLink="false">https://www.aptech.com/?p=11582608</guid>

					<description><![CDATA[State-space models provide a powerful environment for modeling dynamic systems. Their flexibility has resulted in a wide variety of applications across fields including radar tracking, 3-D modeling, monetary policy modeling, weather forecasting, and more. 

In this blog, we look more closely at state-space modeling using a simple time series model of inflation. 

We cover:

<ul>
<li>The components of state-space models. </li>
<li>Representing state-space models in GAUSS.</li>
<li>Estimating model parameters using state-space models. </li>
</ul>]]></description>
										<content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>State-space models provide a powerful environment for modeling dynamic systems. Their flexibility has resulted in a wide variety of applications across fields including radar tracking, 3-D modeling, monetary policy modeling, weather forecasting, and more. </p>
<p>In this blog, we look more closely at state-space modeling using a simple time series model of inflation. </p>
<p>We will cover:</p>
<ul>
<li>The components of state-space models. </li>
<li>Representing state-space models in GAUSS.</li>
<li>Estimating model parameters using state-space models. </li>
</ul>
<h2 id="inflation-modeling">Inflation Modeling</h2>
<p><a href="https://www.aptech.com/wp-content/uploads/2022/08/inflation-cpi-1.jpg"><img src="https://www.aptech.com/wp-content/uploads/2022/08/inflation-cpi-1.jpg" alt="Plot of CPI annual inflation rate. " width="800" height="600" class="alignnone size-full wp-image-11582740" /></a></p>
<p>Today we will use the <a href="https://docs.aptech.com/gauss/sslib/overview.html" target="_blank" rel="noopener">state-space framework</a> to build a simple, albeit naive, model of inflation. This model uses a CPI inflation index, created from the <a href="https://fred.stlouisfed.org/series/CPIAUCNS#0" target="_blank" rel="noopener">FRED CPIAUCNS average quarterly dataset</a>. Our data ranges from 1971q1 to 2021q4.</p>
<div class="alert alert-info" role="alert">Note: This model can be estimated using the prebuilt <a href="https://docs.aptech.com/gauss/sslib/ssarima.html" target="_blank" rel="noopener">ssARIMA</a> procedure. However, for demonstration purposes, this blog builds the state-space representation and estimates the model using the <a href="https://docs.aptech.com/gauss/sslib/ssfit.html" target="_blank" rel="noopener">ssFit</a> procedure.</div>
<h2 id="state-space-model-specification">State-Space Model Specification</h2>
<p>State-space systems specify the dynamics and relationship of two components:</p>
<ul>
<li>Observed data.</li>
<li>Unobserved state variable. </li>
</ul>
<p>These components are modeled using a <strong>measurement equation</strong> for the observed data,</p>
<p>$$ y_t = d + Z\alpha_t + \epsilon_t $$</p>
<p>and a <strong>transition equation</strong> for the unobserved state variable,</p>
<p>$$ \alpha_{t+1} = c + T\alpha_t + R\eta_t $$</p>
<p>where</p>
<p>$$ \epsilon_t  \sim N(0, H) $$
$$ \eta_t  \sim N(0, Q) $$</p>
<p>and</p>
<table>
 <thead>
<tr><th>Object</th><th>Description</th><th>Dimension</th></tr>
</thead>
<tbody>
<tr><td>$y_t$</td><td>Observed data.</td><td>$p \times 1$</td></tr>
<tr><td>$\alpha_t$</td><td>Unobserved data.</td><td>$m \times 1$</td></tr>
<tr><td>$d$</td><td>Observation intercept.</td><td>$p \times 1$</td></tr>
<tr><td>$Z$</td><td>Design matrix.</td><td>$p \times m$</td></tr>
<tr><td>$H$</td><td>Observation disturbance covariance matrix.</td><td>$p \times p$</td></tr>
<tr><td>$c$</td><td>State intercept.</td><td>$m \times 1$</td></tr>
<tr><td>$T$</td><td>Transition matrix.</td><td>$m \times m$</td></tr>
<tr><td>$R$</td><td>Selection matrix.</td><td>$m \times r$</td></tr>
<tr><td>$\eta_t$</td><td>State disturbance.</td><td>$r \times 1$</td></tr>
<tr><td>$Q$</td><td>State disturbance covariance matrix.</td><td>$r \times r$</td></tr>
</tbody></table>
<p>While this representation may seem simplistic, the framework offers a very flexible platform for modeling. It is used widely in <a href="https://www.aptech.com/blog/getting-started-with-time-series-in-gauss" target="_blank" rel="noopener">time series analysis</a> including <a href="https://docs.aptech.com/gauss/sslib/ssarima.html" target="_blank" rel="noopener">ARIMA</a>, <a href="https://docs.aptech.com/gauss/sslib/sssarima.html" target="_blank" rel="noopener">SARIMA</a>, <a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-vector-autoregressive-models/" target="_blank" rel="noopener">vector autoregressive models</a>, unobserved components, and dynamic factor models.</p>
<h3 id="example-representation-ar2-model">Example Representation: AR(2) Model</h3>
<p>As an example consider an $AR(2)$ model such that</p>
<p>$$ y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + e_t $$<br />
$$ e_t \sim N(0, \sigma^2) $$</p>
<p>Let</p>
<p>$$ \alpha_t = (y_t, y_{t-1})' $$</p>
<p>Then we can write our $AR(2)$ model such that</p>
<table>
 <thead>
<tr><th colspan="2">AR(2) State-space Representation</th></tr>
</thead>
<tbody>
<tr><td>Measurement Equation</td><td>$y_t = \begin{bmatrix} 1 &amp; 0 \end{bmatrix} \alpha_t$</td></tr>
<tr><td>Transition Equation</td><td>$\alpha_t   = \begin{bmatrix} \phi_1 &amp; \phi_2\\ 1 &amp; 0\end{bmatrix} \alpha_{t-1}  + \begin{bmatrix} 1\\ 0 \end{bmatrix} \eta_t$</td></tr>
</tbody></table>
<p>where</p>
<table>
 <thead>
<tr><th>Object</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>$d$</td><td>0</td></tr>
<tr><td>$Z$</td><td>$\begin{bmatrix} 1 &amp; 0 \end{bmatrix}$</td></tr>
<tr><td>$H$</td><td>0</td></tr>
<tr><td>$c$</td><td>0</td></tr>
<tr><td>$T$</td><td>$\begin{bmatrix} \phi_1 &amp; \phi_2\\ 1 &amp; 0 \end{bmatrix}$</td></tr>
<tr><td>$R$</td><td>$\begin{bmatrix} 1 \\ 0 \end{bmatrix}$</td></tr>
<tr><td>$Q$</td><td>$\sigma^2$</td></tr>
</tbody></table>
<div class="alert alert-info" role="alert">For a detailed derivation of this representation see our previous blog, <a href="https://www.aptech.com/resources/tutorials/tsmt/filtering-data-with-the-kalman-filter/" target="_blank" rel="noopener">Filtering Data With the Kalman Filter</a>.</div>
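<p>One way to convince yourself that this representation really is the $AR(2)$ model is to simulate both recursions with the same shocks and confirm they coincide. A Python sketch with made-up coefficients (not sslib code):</p>

```python
import numpy as np

# Made-up AR(2) coefficients for illustration
phi1, phi2 = 0.5, 0.3

Z = np.array([[1.0, 0.0]])             # design matrix
T = np.array([[phi1, phi2],
              [1.0,  0.0]])            # transition matrix
R = np.array([[1.0],
              [0.0]])                  # selection matrix

rng = np.random.default_rng(42)
eta = rng.standard_normal(200)

# Simulate y_t through the state-space recursion
alpha = np.zeros(2)                    # alpha_t = (y_t, y_{t-1})'
y_ss = []
for e in eta:
    alpha = T @ alpha + (R * e).ravel()
    y_ss.append((Z @ alpha).item())

# Simulate the AR(2) directly with the same shocks
y1 = y2 = 0.0
y_ar = []
for e in eta:
    y = phi1 * y1 + phi2 * y2 + e
    y_ar.append(y)
    y1, y2 = y, y1

print(np.allclose(y_ss, y_ar))   # True
```

<p>The two simulated paths are identical, which is exactly what the state-space representation promises.</p>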
<h3 id="example-representation-arma11">Example Representation: ARMA(1,1)</h3>
<p>The $ARMA(1,1)$ model is given by </p>
<p>$$ y_t = \phi y_{t-1} + \epsilon_t + \theta \epsilon_{t-1} $$
$$ \epsilon_t \sim N(0, \sigma^2) $$</p>
<p>If we let $ \alpha_t = (y_t, \theta \epsilon_t)' $ then this can be represented in state-space form such that</p>
<table>
 <thead>
<tr><th colspan="2">ARMA(1,1) State-space Representation</th></tr>
</thead>
<tbody>
<tr><td>Measurement Equation</td><td>$y_t = \begin{bmatrix} 1 &amp; 0 \end{bmatrix} \alpha_t$</td></tr>
<tr><td>Transition Equation</td><td>$\alpha_t = \begin{bmatrix} \phi &amp; 1\\ 0 &amp; 0\end{bmatrix} \alpha_{t-1}  + \begin{bmatrix} 1\\ \theta \end{bmatrix} \epsilon_t$</td></tr>
</tbody></table>
<p>where</p>
<table>
 <thead>
<tr><th>Object</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>$d$</td><td>0</td></tr>
<tr><td>$Z$</td><td>$\begin{bmatrix} 1 &amp; 0 \end{bmatrix}$</td></tr>
<tr><td>$H$</td><td>0</td></tr>
<tr><td>$c$</td><td>0</td></tr>
<tr><td>$T$</td><td>$\begin{bmatrix} \phi &amp; 1\\ 0 &amp; 0\end{bmatrix}$</td></tr>
<tr><td>$R$</td><td>$\begin{bmatrix} 1\\ \theta \end{bmatrix}$</td></tr>
<tr><td>$Q$</td><td>$\sigma^2$</td></tr>
</tbody></table>
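<p>The same simulation check works for the $ARMA(1,1)$ representation: running the state-space recursion and the direct $ARMA(1,1)$ recursion with identical shocks produces identical series. An illustrative Python sketch with made-up coefficients:</p>

```python
import numpy as np

# Made-up ARMA(1,1) coefficients for illustration
phi, theta = 0.8, 0.4

T = np.array([[phi, 1.0],
              [0.0, 0.0]])             # transition matrix
R = np.array([1.0, theta])             # selection matrix as a vector

rng = np.random.default_rng(7)
eps = rng.standard_normal(300)

# State-space recursion: alpha_t = (y_t, theta * eps_t)'
alpha = np.zeros(2)
y_ss = []
for e in eps:
    alpha = T @ alpha + R * e
    y_ss.append(alpha[0])              # y_t = [1 0] alpha_t

# Direct ARMA(1,1) recursion with the same shocks
y_prev = e_prev = 0.0
y_arma = []
for e in eps:
    y = phi * y_prev + e + theta * e_prev
    y_arma.append(y)
    y_prev, e_prev = y, e

print(np.allclose(y_ss, y_arma))   # True
```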
<h2 id="estimation-of-state-space-models">Estimation of State-Space Models</h2>
<p>In most cases, the parameters of our state-space representation are unknown and we wish to estimate them. However, this requires not only estimating the parameters but also estimating the unobserved state variable. Fortunately, we can do both using a combination of the <a href="https://docs.aptech.com/gauss/tsmt/kalmanfilter.html" target="_blank" rel="noopener">Kalman filter</a> and <a href="https://www.aptech.com/blog/beginners-guide-to-maximum-likelihood-estimation-in-gauss/" target="_blank" rel="noopener">maximum likelihood estimation</a>. </p>
<table>
 <thead>
<tr><th colspan="2">Estimating state-space models</th></tr>
</thead>
<tbody>
<tr><td>Kalman filter</td><td> The Kalman filter uses recursive iteration to estimate the unknown state.</td></tr>
<tr><td>Maximum likelihood estimation</td><td>Uses the likelihood function generated from the Kalman filter to estimate the unknown parameters.</td></tr>
</tbody></table>
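<p>These two pieces fit together through the prediction error decomposition: the Kalman filter produces one-step-ahead prediction errors and their variances, and those define the Gaussian likelihood that maximum likelihood estimation maximizes. Below is a minimal textbook version of this likelihood in Python, a teaching sketch with our own function names, not the sslib or tsmt implementation:</p>

```python
import numpy as np

def kalman_loglik(y, T, R, Q, Z, H, a0, P0):
    """Gaussian log-likelihood via the prediction error decomposition.
    y is univariate; a0/P0 are the prior state mean and covariance."""
    a, P = a0.copy(), P0.copy()
    RQR = R @ Q @ R.T
    ll = 0.0
    for yt in y:
        v = yt - (Z @ a).item()              # one-step prediction error
        F = (Z @ P @ Z.T).item() + H         # prediction error variance
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + v**2 / F)
        K = P @ Z.T / F                      # Kalman gain (filtering step)
        a = a + (K * v).ravel()
        P = P - K @ (Z @ P)
        a = T @ a                            # transition (prediction) step
        P = T @ P @ T.T + RQR
    return ll

# Sanity check on simulated AR(1) data: the true coefficient fits better.
rng = np.random.default_rng(3)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

def ar1_ll(phi):
    return kalman_loglik(y, np.array([[phi]]), np.array([[1.0]]),
                         np.array([[1.0]]), np.array([[1.0]]), 0.0,
                         np.zeros(1), np.array([[1.0 / (1 - phi**2)]]))

print(ar1_ll(0.8) > ar1_ll(0.2))   # True
```

<p>Maximum likelihood estimation then amounts to handing this log-likelihood, as a function of the parameter vector, to a numerical optimizer.</p>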
<p>Let's look at estimating our $ARMA(1,1)$ model using the new <code>sslib</code> library. </p>
<h3 id="the-sslib-library">The sslib Library</h3>
<p>The <code>sslib</code> library is a GAUSS application module for setting up and estimating state-space models.
The state-space library requires:</p>
<ol>
<li>A working copy of <a href="https://www.aptech.com/blog/gauss22/" target="_blank" rel="noopener"><strong>GAUSS 22+</strong></a>.</li>
<li>The <a href="https://store.aptech.com/gauss-applications-category/constrained-maximum-likelihood-mt.html" target="_blank" rel="noopener">Constrained Maximum Likelihood MT library</a> for GAUSS.</li>
<li>The <a href="https://store.aptech.com/gauss-applications-category/time-series-mt.html" target="_blank" rel="noopener">Time Series MT library</a> for GAUSS.</li>
</ol>
<h3 id="loading-and-transforming-data">Loading and Transforming Data</h3>
<p>To begin our state-space inflation model, we need to:</p>
<ul>
<li>Load the required libraries:<code>cmlmt</code>, <code>tsmt</code>, and <code>sslib</code>.</li>
<li>Load the CPI data.</li>
<li>Calculate the inflation rate. </li>
</ul>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">new;

/*
** Load required libraries
*/
library cmlmt, tsmt, sslib;

/*
** Perform import
*/
cpi_data = loadd(__FILE_DIR $+ "cpi_fred_q_ext.csv", "date($date) + cpi");

// Filter to exclude 2022 data
cpi_data = selif(cpi_data, cpi_data[., "date"] .&lt; "2022");

/*
** Calculate log difference 
** annual inflation rate
*/

// Compute log difference rate
y = tsdiff(ln(cpi_data[., "cpi"]), 1);

// Convert from average quarterly decimal
// to annual percentage 
y = 4*y*100;

// Rename y variable
y = dfname(y, "Inflation", "cpi");</code></pre>
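<p>For readers following along without GAUSS, the log-difference transformation above can be sketched in Python with a few hypothetical CPI values:</p>

```python
import numpy as np

# Hypothetical quarterly CPI index values (illustration only)
cpi = np.array([100.0, 101.2, 102.1, 103.5, 104.0])

# Quarterly log difference, converted from an average quarterly
# decimal to an annual percentage:
# y_t = 4 * 100 * (ln(CPI_t) - ln(CPI_{t-1}))
y = 4 * 100 * np.diff(np.log(cpi))
print(np.round(y, 2))
```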
<h3 id="setting-up-the-parameter-vector-and-start-values">Setting Up the Parameter Vector and Start Values</h3>
<p>Our inflation model has three unknown parameters that we will estimate, $\phi$, $\theta$, and $\sigma^2$. This means our initial parameter vector will be a $3 \times 1$ column vector of starting values. </p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Step two: Set up parameter vector 
**           and start values
*/
param_vec_st = asDF(zeros(3, 1), "param");

param_vec_st[1] = 0;
param_vec_st[2] = 0;
param_vec_st[3] = 1;</code></pre>
<div class="alert alert-info" role="alert">All custom state-space models require an initial parameter column vector. Some models are more sensitive than others to initial values. If your model is having trouble converging, try adjusting the starting values.</div>
<h3 id="initializing-the-control-structure-and-system-matrices">Initializing the Control Structure and System Matrices</h3>
<p>The <code>sslib</code> uses a <code>ssControl</code> structure to:</p>
<ol>
<li>Specify the state-space system matrices.</li>
<li>Implement <a href="https://www.aptech.com/blog/how-to-conduct-unit-root-tests-in-gauss/" target="_blank" rel="noopener">stationarity</a> and non-negativity constraints on parameters.</li>
<li>Control modeling features.</li>
<li>Specify <a href="https://www.aptech.com/blog/maximum-likelihood-estimation-in-gauss/" target="_blank" rel="noopener">advanced maximum likelihood</a> controls.</li>
</ol>
<p>The first step to initializing our model is specifying our model dimensions. These values will be used to initialize the system matrices. </p>
<p>The model dimensions to be specified include:</p>
<table>
 <thead>
<tr><th>Parameter</th><th>Description</th><th>$ARMA(1,1)$ Model</th></tr>
</thead>
<tbody>
<tr><td><code>k_endog</code></td><td>Number of endogenous variables.</td><td>1</td></tr>
<tr><td><code>k_states</code></td><td>Number of state variables.  </td><td>2</td></tr>
<tr><td><code>k_posdef</code></td><td>Optional, dimension of the state innovation with positive definite covariance matrix. Default = k_states.</td><td>2</td></tr>
</tbody></table>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Step three: Set up control structure
**             and model matrices.
*/
// Number of endogenous variables
k_endog = 1;

// Number of states
k_states = 2;</code></pre>
<p>Next we initialize our <code>ssControl</code> structure and set the default system matrices using <a href="https://docs.aptech.com/gauss/sslib/sscontrolcreate.html" target="_blank" rel="noopener">ssControlCreate</a>:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Declare an instance of the control structure
struct ssControl ssCtl;
ssCtl = ssControlCreate(k_states, k_endog);

// Specify parameter names for output printing
ssCtl.param_names = "phi"$|"theta"$|"sigma2";</code></pre>
<h3 id="specifying-the-state-space-representation">Specifying the State-Space Representation</h3>
<p>After initializing the state-space system matrices, we are ready to specify our state-space representation. This is done in two separate steps:</p>
<ol>
<li>Specify the fixed system matrices that do not contain parameters. </li>
<li>Specify the matrices which do contain parameters using a custom <code>updateSSModel</code> function.</li>
</ol>
<p>The system matrices are stored in the initialized <code>ssControl</code> structure, inside an <code>ssModel</code> structure member named <code>ssm</code>. </p>
<table>
 <thead>
<tr><th>Object</th><th>Description</th><th>Dimensions</th></tr>
</thead>
<tbody>
<tr><td><code>ssm.d</code></td><td>Observation intercept.</td><td> $k_{endog} \times 1$</td></tr>
<tr><td><code>ssm.Z</code></td><td>Design matrix.</td><td> $k_{endog} \times k_{states}$</td></tr>
<tr><td><code>ssm.H</code></td><td>Observation disturbance covariance.</td><td> $k_{endog} \times k_{endog}$</td></tr>
<tr><td><code>ssm.c</code></td><td>State intercept.</td><td> $k_{states} \times 1$</td></tr>
<tr><td><code>ssm.T</code></td><td>Transition matrix.</td><td> $k_{states} \times k_{states}$</td></tr>
<tr><td><code>ssm.R</code></td><td>Selection matrix.</td><td> $k_{states} \times k_{posdef}$</td></tr>
<tr><td><code>ssm.Q</code></td><td>State disturbance covariance.</td><td> $k_{posdef} \times k_{posdef}$</td></tr>
<tr><td><code>ssm.a_0</code></td><td>Initial prior state mean.</td><td> $k_{states} \times 1$</td></tr>
<tr><td><code>ssm.p_0</code></td><td>Initial prior state covariance.</td><td>$k_{states} \times k_{states}$</td></tr>
</tbody></table>
<h4 id="specifying-fixed-system-matrices">Specifying Fixed System Matrices</h4>
<p>To specify the fixed system matrices we use standard matrix assignment:</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">// Set fixed parameters of model
ssctl.ssm.Z = { 1  0 };
ssctl.ssm.H = 0;</code></pre>
<h4 id="specifying-system-matrices-with-parameters">Specifying System Matrices With Parameters</h4>
<p>To specify the relationship between the model parameters and the state-space system matrices, we use an <code>updateSSModel</code> procedure. This procedure is then passed to the <code>ssFit</code> procedure for estimation. </p>
<p>The <code>updateSSModel</code> procedure should always take two inputs:</p>
<table>
 <thead>
<tr><th>Object</th><th>Specification</th></tr>
</thead>
<tbody>
<tr><td><code>*ssmod</code></td><td>A pointer to an <code>ssModel</code> structure. </td></tr>
<tr><td><code>param</code></td><td>The parameter vector. </td></tr>
</tbody></table>
<p>Since the procedure receives a pointer to the <code>ssModel</code> structure, we use the arrow notation, <code>-&gt;</code>, to assign values to members of the structure.</p>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss"> /*
** Set up procedure for updating SS model 
** structure.
**
*/
proc (0) = updateSSModel(struct ssModel *ssmod, param);

    // Set up kalman filter matrices
    ssmod-&gt;R = 1|param[2];
    ssmod-&gt;T = (param[1]~1)|(0~0);
    ssmod-&gt;Q = param[3];

endp;</code></pre>
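<p>The same update pattern can be illustrated in Python, where mutating an object passed into a function plays the role of the structure pointer in GAUSS. This is a sketch with our own names, not sslib code:</p>

```python
import numpy as np

class SSModel:
    """Minimal Python stand-in for the GAUSS ssModel structure."""
    def __init__(self):
        self.T = self.R = self.Q = None

def update_ss_model(ssmod, param):
    # Map the parameter vector (phi, theta, sigma2) into the
    # ARMA(1,1) system matrices, mirroring the GAUSS updateSSModel.
    phi, theta, sigma2 = param
    ssmod.R = np.array([[1.0],
                        [theta]])
    ssmod.T = np.array([[phi, 1.0],
                        [0.0, 0.0]])
    ssmod.Q = np.array([[sigma2]])

mod = SSModel()
update_ss_model(mod, [0.9, -0.6, 6.5])
print(mod.T[0, 0])   # 0.9
```

<p>Because Python passes objects by reference, the function can update the caller's model in place, just as the <code>-&gt;</code> pointer assignments do in the GAUSS procedure.</p>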
<h3 id="parameter-constraints">Parameter constraints</h3>
<p>The final step before estimation is to specify our parameter constraints. For our $ARMA(1,1)$ model we need to:</p>
<ol>
<li>Constrain $\phi$ and $\theta$ to be stationary/invertible using the <code>stationary_vars</code> member of the <code>ssControl</code> structure.</li>
<li>Constrain $\sigma^2$ to be non-negative using the <code>positive_vars</code> member of the <code>ssControl</code> structure.</li>
</ol>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Constrained variables
*/

/* 
** This stationary_vars member
** indicates which variables should be 
** constrained to stationarity.
*/
// Set the first and second parameters in
// the parameter vector to be stationary
ssCtl.stationary_vars = 1|2;

/* 
** This positive_vars member
** indicates which variables should be 
** constrained to be positive.
*/
// Set the third parameter in
// the parameter vector to be positive
ssCtl.positive_vars = 3;</code></pre>
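<p>The <code>sslib</code> imposes these constraints for you. When hand-rolling maximum likelihood without such support, one common alternative is to reparameterize: optimize over unconstrained values and transform them into the valid region. A Python sketch with our own helper names (the tanh mapping handles a single AR or MA coefficient; higher-order models need more care):</p>

```python
import numpy as np

def to_stationary(u):
    # tanh maps any real number into (-1, 1)
    return np.tanh(u)

def to_positive(u):
    # exp maps any real number onto (0, inf)
    return np.exp(u)

phi = to_stationary(5.0)       # close to, but strictly below, 1
sigma2 = to_positive(-1.0)     # small but strictly positive

print(-1 < phi < 1, sigma2 > 0)   # True True
```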
<h3 id="estimation">Estimation</h3>
<p>Once the model is specified and the constraints are set, the parameters are estimated using the <code>ssFit</code> procedure. This procedure requires four inputs:</p>
<hr>
<dl>
<dt>&amp;updateSSModel</dt>
<dd>A pointer to a procedure that updates the state-space system matrices with the parameters.</dd>
<dt>param_vec_st</dt>
<dd>Vector, starting parameter values.</dd>
<dt>y</dt>
<dd>Vector, the response data.</dd>
<dt>ssCtl</dt>
<dd>Structure, an instance of the <code>ssControl</code> structure used to control features of the state-space model, Kalman filter, and maximum likelihood estimation.</dd>
</dl>
<hr>
<pre class="hljs-container hljs-container-solo"><code class="lang-gauss">/*
** Step six: Call the ssFit procedure.
**            This will:
**              1. Estimate model parameters.
**              2. Estimate inference statistics (se, t-stats).
**              3. Perform model residual diagnostics.
**              4. Compute model diagnostics and summary statistics.
*/
struct ssOut sOut;
sOut = ssFit(&amp;updateSSModel, param_vec_st, y, ssCtl); </code></pre>
<p>The <code>ssFit</code> procedure stores results to an <code>ssOut</code> structure and prints estimates, inference statistics, and model diagnostics to screen:</p>
<pre>Return Code:                                                             0
Log-likelihood:                                                     -492.4
Number of Cases:                                                       202
AIC:                                                                 990.7
AICC:                                                                990.8
BIC:                                                                  1001
HQIC:                                                                989.7
Covariance Method:                                    ML covariance matrix
==========================================================================

  Parameters   Estimates   Std. Err.      T-stat       Prob.    Gradient
--------------------------------------------------------------------------
         phi      0.9810      0.0160     61.2187      0.0000     -0.0026
       theta     -0.6266      0.0682     -9.1923      0.0000      0.0005
      sigma2      6.5535      0.6537     10.0246      0.0000      0.0000

Wald 95% Confidence Limits
--------------------------------------------------------------------------
  Parameters   Estimates Lower Limit Upper Limit    Gradient
--------------------------------------------------------------------------
         phi      0.9810     -0.8449     -0.6672     -0.0026
       theta     -0.6266      0.5200      1.0881      0.0005
      sigma2      6.5535      2.3061      2.8138      0.0000

Model and residual diagnostics:
==========================================================================

Ljung-Box (Q):                                                        4.12
Prob(Q):                                                            0.0425
Heteroskedasticity (H):                                               1.91
Prob(H):                                                           0.00885
Jarque-Bera (JB):                                                      616
Prob(JB):                                                        1.62e-134
Skew:                                                                 -1.3
Kurtosis:                                                             11.2
==========================================================================</pre>
<h3 id="conclusion">Conclusion</h3>
<p>State-space models are powerful and can be used to tackle a wide variety of modeling problems. With a little practice, state-space models can expand your modeling universe substantially.</p>
<p>This blog uses a simple $ARMA(1,1)$ model of inflation to give you a foundation for getting started with state-space models using the GAUSS <code>sslib</code>. </p>
<h3 id="further-reading">Further Reading</h3>
<ol>
<li><a href="https://www.aptech.com/resources/tutorials/tsmt/filtering-data-with-the-kalman-filter/" target="_blank" rel="noopener">Filtering Data With the Kalman Filter</a></li>
<li><a href="https://www.aptech.com/blog/beginners-guide-to-maximum-likelihood-estimation-in-gauss/" target="_blank" rel="noopener">Beginner's Guide to Maximum Likelihood Estimation</a></li>
<li><a href="https://www.aptech.com/blog/maximum-likelihood-estimation-in-gauss/" target="_blank" rel="noopener">Maximum Likelihood Estimation in GAUSS</a></li>
<li><a href="https://www.aptech.com/blog/introduction-to-the-fundamentals-of-time-series-data-and-analysis/" target="_blank" rel="noopener">Introduction to the Fundamentals of Time Series Data and Analysis</a></li>
<li><a href="https://www.aptech.com/blog/getting-started-with-time-series-in-gauss/" target="_blank" rel="noopener">Getting Started With Time Series in GAUSS</a></li>
</ol>
<p>    <!-- MathJax configuration -->
    <style>
        .mjx-svg-href {
            fill: "inherit" !important;
            stroke: "inherit" !important;
        }
    </style>
    <script type="text/x-mathjax-config">
        MathJax.Hub.Config({ TeX: { equationNumbers: {autoNumber: "AMS"} } });
    </script>
    <script type="text/javascript">
window.MathJax = {
  tex2jax: {
    inlineMath: [ ['$','$'] ],
    displayMath: [ ['$$','$$'] ],
    processEscapes: true,
    processEnvironments: true
  },
  // Center justify equations in code and markdown cells. Elsewhere
  // we use CSS to left justify single line equations in code cells.
  displayAlign: 'center',
  "HTML-CSS": {
    styles: {'.MathJax_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  "SVG": {
    styles: {'.MathJax_SVG_Display': {"margin": 0}},
    linebreaks: { automatic: false }
  },
  showProcessingMessages: false,
  messageStyle: "none",
  menuSettings: { zoom: "Click" },
  AuthorInit: function() {
    MathJax.Hub.Register.StartupHook("End", function() {
            var timeout = false, // holder for timeout id
            delay = 250; // delay after event is "complete" to run callback
            var shrinkMath = function() {
              //var dispFormulas = document.getElementsByClassName("formula");
              var dispFormulas = document.getElementsByClassName("MathJax_SVG_Display");
              if (dispFormulas){
                // caculate relative size of indentation
                var contentTest = document.getElementsByTagName("body")[0];
                var nodesWidth = contentTest.offsetWidth;
                // if you have indentation
                var mathIndent = MathJax.Hub.config.displayIndent; //assuming px's
                var mathIndentValue = mathIndent.substring(0,mathIndent.length - 2);
                for (var i=0; i<dispFormulas.length; i++){
                  var dispFormula = dispFormulas[i];
                  var wrapper = dispFormula;
                  //var wrapper = dispFormula.getElementsByClassName("MathJax_Preview")[0].nextSibling;
                  var child = wrapper.firstChild;
                  wrapper.style.transformOrigin = "center"; //or top-left if you left-align your equations
                  var oldScale = child.style.transform;
                  //var newValue = Math.min(0.80*dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newValue = Math.min(dispFormula.offsetWidth / child.offsetWidth,1.0).toFixed(2);
                  var newScale = "scale(" + newValue + ")";
                  if(newValue != "NaN" && !(newScale === oldScale)){
                    wrapper.style.transform = newScale;
                    wrapper.style["margin-left"]= Math.pow(newValue,4)*mathIndentValue + "px";
                    var wrapperStyle = window.getComputedStyle(wrapper);
                    var wrapperHeight = parseFloat(wrapperStyle.height);
                    wrapper.style.height = "" + (wrapperHeight * newValue) + "px";
                    if(newValue === "1.00"){
                      wrapper.style.cursor = "";
                      wrapper.style.height = "";
                    }
                    else {
                      wrapper.style.cursor = "zoom-in";
                    }
                  }

                }
            }
            };
            shrinkMath();
            window.addEventListener('resize', function() {
              clearTimeout(timeout);
              timeout = setTimeout(shrinkMath, delay);
            });
          });
  }
}
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-AMS_SVG"></script>
</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aptech.com/blog/understanding-state-space-models-an-inflation-example/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
			</item>
	</channel>
</rss>
