In structural vector autoregressive (SVAR) modeling, one of the core challenges is identifying the structural shocks that drive the system’s dynamics.
Traditional identification approaches often rely on short-run or long-run restrictions, which require strong theoretical assumptions about contemporaneous relationships or long-term behavior.
Sign restriction identification provides greater flexibility by allowing economists to specify only the direction of variable responses to shocks (positive, negative, or neutral), based on theory.
In this blog, we’ll show you how to implement sign restriction identification using the new GAUSS procedure, **svarFit**, introduced in TSMT 4.0.
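To make this concrete, a set of sign restrictions can be written down as a small pattern matrix before estimation. The sketch below builds one for a hypothetical three-variable model (output, prices, and the interest rate) and a contractionary monetary policy shock; the variable ordering and shock labels are illustrative only, and the pattern would be supplied to **svarFit** through its sign-restriction identification option (see the TSMT 4.0 documentation for the exact argument name).

```
// Hypothetical sign pattern for a 3-variable SVAR with variables
// ordered as output, prices, interest rate (rows) and three shocks
// (columns). The second column identifies a contractionary monetary
// policy shock:
//    1 = response restricted to be positive
//   -1 = response restricted to be negative
//    0 = response left unrestricted (neutral)
sign_pattern = { 0 -1 0,
                 0 -1 0,
                 0  1 0 };
```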
Structural Vector Autoregressive (SVAR) models provide a structured approach to modeling dynamics and understanding the relationships between multiple time series variables. Their ability to capture complex interactions among multiple endogenous variables makes SVAR models fundamental tools in economics and finance. However, traditional software for estimating SVAR models has often been complicated, making analysis difficult to perform and interpret.
In today’s blog, we present a step-by-step guide to using the new GAUSS procedure, svarFit, introduced in TSMT 4.0. We will cover:
Estimating reduced form models.
Structural identification using short-run restrictions.
Structural identification using long-run restrictions.
Structural identification using sign restrictions.
The Constrained Maximum Likelihood (CML) library was one of the original constrained optimization tools in GAUSS. Like many GAUSS libraries, it was later updated to an “MT” version.
The “MT” version libraries, named for their use of multi-threading, provide significant performance improvements, greater flexibility, and a more intuitive parameter-handling system.
This blog post explores:
The key features, differences, and benefits of upgrading from CML to CMLMT.
A practical example to help you transition code from CML to CMLMT.
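As a rough preview of where we are headed, the sketch below shows the general CMLMT calling pattern: a likelihood procedure that fills in a modelResults structure, a cmlmtControl structure in place of the old `_cml_*` globals, and a single call to `cmlmt`. It assumes the newer interface in which data matrices are passed directly as extra arguments (older code packs parameters and data into PV and DS structures), and the control members and argument order shown here should be checked against the CMLMT documentation.

```
new;
library cmlmt;

// Log-likelihood in the CMLMT style: the procedure receives the
// parameter vector, the data, and an indicator vector 'ind' that
// flags which results (function, gradient, Hessian) are requested.
proc (1) = lnlk(b, y, ind);
    struct modelResults mm;
    local mu, sigma;

    mu = b[1];
    sigma = b[2];

    if ind[1];
        // Normal log-likelihood: sum of ln phi((y-mu)/sigma) - ln sigma
        mm.function = sumc(lnpdfn((y - mu) ./ sigma)) - rows(y) * ln(sigma);
    endif;

    retp(mm);
endp;

// Simulated data (hypothetical example)
rndseed 4534;
y = 3 + 1.5 * rndn(500, 1);

// Control structure replaces the old _cml_* globals;
// here we simply bound sigma away from zero.
struct cmlmtControl ctl;
ctl = cmlmtControlCreate();
ctl.bounds = { -1e256 1e256,
                1e-6  1e256 };

// Starting values and estimation
b0 = { 0, 1 };
struct cmlmtResults out;
out = cmlmt(&lnlk, b0, y, ctl);

call cmlmtPrt(out);
```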
Categorical data plays a key role in data analysis, offering a structured way to capture qualitative relationships. Before running any models, simply examining the distribution of categorical data can provide valuable insights into underlying patterns.
Whether summarizing survey responses or exploring demographic trends, fundamental statistical tools, such as frequency counts and tabulations, help reveal these patterns.
In GAUSS 25, these functions received significant enhancements, making them more powerful and user-friendly. In this post, we’ll explore these improvements and demonstrate their practical applications.
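As a quick taste, the `frequency` procedure prints a one-way frequency table for a categorical variable in a single line; the file and variable names below are hypothetical stand-ins for your own data.

```
new;

// Load a dataset containing a categorical column
// (hypothetical file and variable names)
df = loadd("survey_results.csv");

// One-way frequency table of the categorical variable 'education'
frequency(df, "education");
```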
If you’re an applied researcher, odds are (no pun intended) you’ve used hypothesis testing. Hypothesis testing is an essential part of practical applications, from validating economic models, to assessing policy impacts, to making informed business and financial decisions.
The usefulness of hypothesis testing lies in its ability to provide a structured framework for making objective decisions based on data rather than intuition or anecdotal evidence. It gives us a data-driven method to check the validity of our assumptions and models. The intuition is simple: by formulating null and alternative hypotheses, we can determine whether observed relationships between variables are statistically significant or simply due to chance.
In today’s blog we’ll look more closely at the statistical intuition of hypothesis testing using the Wald Test and provide a step-by-step guide for implementing hypothesis testing in GAUSS.
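To preview the mechanics, the sketch below simulates a small regression and computes the Wald statistic for a joint hypothesis by hand, W = (Rb - q)' [R V R']^(-1) (Rb - q), comparing it against the chi-squared distribution. The simulated data and the hypothesis are purely illustrative.

```
new;

// Simulate data: y = 1 + 0.5*x1 + 0*x2 + e (illustrative only)
rndseed 9172;
n = 200;
x = ones(n, 1) ~ rndn(n, 2);
b_true = 1 | 0.5 | 0;
y = x * b_true + rndn(n, 1);

// OLS estimates and coefficient covariance matrix
b_hat = invpd(x'x) * x'y;
e = y - x * b_hat;
s2 = e'e / (n - cols(x));
v = s2 * invpd(x'x);

// Wald test of H0: b2 = 0 and b3 = 0
// W = (R*b - q)' * inv(R*V*R') * (R*b - q) ~ chi-squared(J)
R = { 0 1 0,
      0 0 1 };
q = { 0, 0 };
w = (R*b_hat - q)' * invpd(R*v*R') * (R*b_hat - q);
j = rows(q);

print "Wald statistic:" w;
print "p-value:" cdfChic(w, j);
```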
In this video, you’ll learn the basics of panel data analysis in GAUSS. We demonstrate panel data modeling from start to finish, from loading data to running a group-specific intercept model.
In this video, you’ll learn the basics of choice data analysis in GAUSS. Our video demonstration shows just how quick and easy it is to get started with everything from data loading to discrete data modeling.
In reality, data analysis is rarely as clean and tidy as the textbooks suggest. Consider linear regression: data rarely meets the stringent assumptions required for OLS. Failing to recognize this and incorrectly applying OLS can lead to embarrassing, inaccurate conclusions.
In today’s blog, we’ll look at how to use feasible generalized least squares (FGLS) to deal with data that does not meet the OLS assumption of independent and identically distributed (IID) error terms.
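As a preview of the approach, the sketch below runs a textbook two-step FGLS for heteroskedastic errors: estimate OLS, model the squared residuals, then reweight and re-estimate. The simulated data and the exponential variance model are illustrative choices rather than the only option.

```
new;

// Simulated heteroskedastic data (illustrative only):
// the error standard deviation depends on the regressor
rndseed 2304;
n = 300;
x = ones(n, 1) ~ rndn(n, 1);
sd_i = exp(0.5 * x[., 2]);
y = x * (2|1) + sd_i .* rndn(n, 1);

// Step 1: OLS to obtain residuals
b_ols = invpd(x'x) * x'y;
e = y - x * b_ols;

// Step 2: model the error variance, e.g. regress ln(e^2) on x
g = invpd(x'x) * x' * ln(e.^2);
h = exp(x * g);

// Step 3: weighted (GLS) regression using 1/sqrt(h) as weights
wt = 1 ./ sqrt(h);
xw = x .* wt;
yw = y .* wt;
b_fgls = invpd(xw'xw) * xw'yw;

print "OLS estimates:" b_ols';
print "FGLS estimates:" b_fgls';
```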
Survey data is a powerful analysis tool, providing a window into people’s thoughts, behaviors, and experiences. By collecting responses from a diverse sample of respondents on a range of topics, surveys offer invaluable insights. These can help researchers, businesses, and policymakers make informed decisions and understand diverse perspectives.
In today’s blog we’ll look more closely at survey data, including:
Fundamental characteristics of survey data.
Data cleaning considerations.
Data exploration using frequency tables and data visualizations.
Anyone who works with panel data knows that pivoting between long and wide form, though commonly necessary, is painstakingly tedious at best. At worst, it leads to frustrating errors, unexpected results, and lengthy troubleshooting.
The new dfLonger and dfWider procedures introduced in GAUSS 24 make great strides towards fixing that. Extensive planning has gone into each procedure, resulting in comprehensive but intuitive functions.
In today’s blog, we will walk through all you need to know about the dfLonger procedure to tackle even the most complex cases of transforming wide form panel data to long form.
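To give a flavor of what is coming, here is a minimal sketch of the basic pattern, assuming the four-argument form of dfLonger (the data, the columns to pivot, a name for the new key column, and a name for the new value column). The file and column names are hypothetical, and the full post covers the optional controls needed for more complex cases.

```
new;

// Hypothetical wide-form panel: one row per country,
// one GDP column per year (y2000, y2005, y2010)
df_wide = loadd("gdp_wide.csv");

// Pivot the year columns into long form:
//   2nd argument - the wide columns to stack
//   3rd argument - name of the new key column ("year")
//   4th argument - name of the new value column ("gdp")
// (check the GAUSS 24 documentation for the exact signature
//  and the optional control structure)
df_long = dfLonger(df_wide, "y2000"$|"y2005"$|"y2010", "year", "gdp");

// Preview the result
head(df_long);
```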