This blog provides a non-technical look at impulse response functions and forecast error variance decomposition, both integral parts of vector autoregressive models.
If you’re looking to gain a better understanding of these important multivariate time series techniques, you’re in the right place.
We cover the basics, including:
What is structural analysis?
What are impulse response functions?
How do we interpret impulse response functions?
What is forecast error variance decomposition?
How do we interpret forecast error variance decomposition?
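Before diving in, a tiny numerical sketch can make these two ideas concrete. The snippet below is an illustration in Python (not the blog's GAUSS code), using a hypothetical bivariate VAR(1) coefficient matrix chosen only for demonstration: impulse responses trace how a one-time shock propagates forward, and the variance decomposition splits each variable's forecast error variance across the shocks (unit shock covariance is assumed here for simplicity).

```python
import numpy as np

# Hypothetical, stable VAR(1): y_t = A y_{t-1} + u_t
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])

horizons = 12
irf = [np.eye(2)]            # Psi_0 = I: impact response of each variable to its own shock
for h in range(1, horizons + 1):
    irf.append(irf[-1] @ A)  # for a VAR(1), Psi_h = A^h

# Forecast error variance decomposition at horizon H, assuming unit shock
# covariance: share of variable i's forecast error variance due to shock j.
H = 12
mse = sum(P ** 2 for P in irf[:H])           # squared responses summed over horizons
fevd = mse / mse.sum(axis=1, keepdims=True)  # each row sums to one

print(np.round(irf[1], 3))
print(np.round(fevd, 3))
```

Because this VAR is stable (both eigenvalues of A lie inside the unit circle), the responses die out as the horizon grows, which is the pattern you typically look for when interpreting IRF plots.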
In today’s blog, you’ll learn the basics of the vector autoregressive model. We lay the foundation for getting started with this crucial multivariate time series model and cover the important details you need to get started.
Categorical variables offer an important opportunity to capture qualitative effects in statistical modeling. Unfortunately, it can be tedious and cumbersome to manage categorical variables in statistical software.
The new GAUSS category type, introduced in GAUSS 21, makes it easy and intuitive to work with categorical data.
In today’s blog, we use real-life housing data to explore the numerous advantages of the GAUSS category type including:
Easy set up and viewing of categorical data.
Simple renaming of category labels.
Easy changing of the reference base case and reordering of categories.
Single-line frequency plots and tables.
Internal creation of dummy variables for regressions.
Proper labeling of categories in regression output.
Categorical variables have an important role in modeling, as they offer a quantitative way to include qualitative outcomes in our models. However, it is important to know how to appropriately use them and how to appropriately interpret models that include them. In this blog, you’ll learn the fundamentals you need to know to make the most of categorical variables.
Maximum likelihood is a fundamental workhorse for estimating model parameters with applications ranging from simple linear regression to advanced discrete choice models. Today we learn how to perform maximum likelihood estimation with the GAUSS Maximum Likelihood MT library using our simple linear regression example.
We’ll show all the fundamentals you need to get started with maximum likelihood estimation in GAUSS including:
How to create a likelihood function.
How to call the maxlikmt procedure to estimate parameters.
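To give a feel for the workflow before we turn to GAUSS, here is a minimal Python analogue (not the Maximum Likelihood MT library itself) of the same two steps: writing a log-likelihood for the linear regression model y = Xb + e with normal errors, and finding the parameters that maximize it. The data here are simulated purely for illustration; for this model the maximizer happens to have a closed form, which lets us check the likelihood directly.

```python
import numpy as np

# Simulated data for illustration: y = 1 + 2*x + e, e ~ N(0, 0.5^2)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

def loglik(params, y, X):
    # params = (b0, b1, log_sigma); log parameterization keeps sigma > 0
    b = params[:2]
    sigma = np.exp(params[2])
    resid = y - X @ b
    return np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                  - 0.5 * (resid / sigma) ** 2)

# Closed-form maximizer for this model: OLS for b, mean squared residual for sigma^2
b_mle = np.linalg.lstsq(X, y, rcond=None)[0]
s_mle = np.sqrt(np.mean((y - X @ b_mle) ** 2))
theta_hat = np.append(b_mle, np.log(s_mle))

print(b_mle, s_mle)
```

In GAUSS, the likelihood procedure plays the role of `loglik` above, and the numerical optimizer searches for the maximizing parameters rather than relying on a closed form.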
Reliable unit root testing is an important step of any time series analysis or panel data analysis.
However, standard time series unit root tests and panel data unit root tests aren’t reliable when structural breaks are present. Because of this, when structural breaks are suspected, we must employ unit root tests that properly incorporate these breaks.
Today we will examine one of those tests, the Carrion-i-Silvestre et al. (2005) panel data test for stationarity in the presence of multiple structural breaks.
Maximum likelihood is a widely used technique for estimation with applications in many areas including time series modeling, panel data, discrete data, and even machine learning.
In today’s blog, we cover the fundamentals of maximum likelihood including:
The basic theory of maximum likelihood.
The advantages and disadvantages of maximum likelihood estimation.
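The core idea is simple enough to demonstrate in a few lines: the MLE is the parameter value that makes the observed sample most probable. The sketch below (illustrative Python, with simulated data) maximizes the log-likelihood of an i.i.d. N(mu, 1) sample over a grid; the maximizer should land right next to the analytic MLE, which for this model is the sample mean.

```python
import numpy as np

# Simulated i.i.d. sample from N(3, 1), for illustration only
rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=1.0, size=500)

def loglik(mu, x):
    # Log-likelihood of an i.i.d. N(mu, 1) sample
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2)

# Brute-force maximization over a grid of candidate values for mu
grid = np.linspace(0.0, 6.0, 601)
mu_hat = grid[np.argmax([loglik(m, data) for m in grid])]

print(mu_hat, data.mean())
```

In practice the grid search is replaced by a numerical optimizer, but the principle is the same for every model estimated by maximum likelihood.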
Self-assessments are a common survey tool, but they can be difficult to analyze due to bias arising from systematic variation in individual reporting styles, known as reporting heterogeneity.
Anchoring vignette questions, combined with the Compound Hierarchical Ordered Probit (CHOPIT) model, allow researchers to address this issue in survey data (King et al. 2004).
This methodology is based on two key identifying assumptions:
Response consistency (RC)
Vignette equivalence (VE)
In today’s blog, we look more closely at the fundamental pieces of this modeling technique including the:
Typical data set up.
Hierarchical Ordered Probit Model (HOPIT).
Likelihood and identifying assumptions used for estimation.
Dummy variables are a common econometric tool, whether working with time series, cross-sectional, or panel data. Unfortunately, raw datasets rarely come formatted with dummy variables that are regression ready.
In today’s blog, we explore several options for creating dummy variables from categorical data in GAUSS, including:
Creating dummy variables from a file using formula strings.
Creating dummy variables from an existing vector of categorical data.
Creating dummy variables from an existing vector of continuous variables.
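The second option above, encoding an existing vector of categorical labels, can be sketched in a few lines. This is a plain-Python illustration of the idea (the blog itself works in GAUSS, and the labels below are made up for the example): build one column per category, then drop the first column so it serves as the reference base case in a regression.

```python
import numpy as np

# Hypothetical vector of categorical labels
labels = np.array(["brick", "frame", "stucco", "frame", "brick", "stucco"])

categories = np.unique(labels)   # sorted unique categories: brick, frame, stucco
# One column per category: 1 where the observation matches, 0 otherwise
dummies = (labels[:, None] == categories[None, :]).astype(float)
# Drop the first column so "brick" is the omitted reference category
dummies_reg = dummies[:, 1:]

print(dummies_reg)
```

Dropping one category avoids the dummy variable trap: with an intercept in the model, including all three columns would make the regressors perfectly collinear.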
Cointegration is an important tool for modeling the long-run relationships in time series data. If you work with time series data, you will likely find yourself needing to use cointegration at some point. This blog provides an in-depth introduction to cointegration and will cover all the nuts and bolts you need to get started.
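A small simulation captures the intuition. In the sketch below (illustrative Python, simulated data, not the blog's example), x is a random walk and y is tied to it by y = 2x + stationary noise: each series wanders on its own, yet the linear combination y - 2x does not. Regressing y on x recovers that cointegrating relationship, and the residual stays bounded while x drifts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.cumsum(rng.normal(size=n))               # I(1) random walk
y = 2.0 * x + rng.normal(scale=0.5, size=n)     # cointegrated with x

# OLS of y on a constant and x estimates the cointegrating vector
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta                            # the cointegrating residual

print(beta)
```

The residual's variance is tiny relative to the variance of the random walk itself, which is exactly the stationarity of the equilibrium error that cointegration describes.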