Panel data, sometimes referred to as longitudinal data, contains observations on multiple cross-sectional units across time. Panel data therefore exhibits characteristics of both cross-sectional data and time-series data. This blend of characteristics has given rise to a unique branch of time series modeling made up of methodologies specific to the panel data structure. This blog offers a complete guide to those methodologies, including the nature of panel data, types of panel data, and panel data models.
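To make the structure concrete, a long-format panel records repeated observations on the same cross-sectional units over time. The small dataset below is purely hypothetical, invented here to illustrate the layout: each individual (id) is observed in more than one year.

```
id   year   income
 1   2018    52000
 1   2019    54500
 2   2018    61000
 2   2019    63800
```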
The aggregate function, first available in GAUSS version 20, computes statistics within data groups. This is particularly useful for panel data. In today’s blog, we take a closer look at aggregate.
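As a quick taste, the sketch below reuses the hypothetical panel shown above. It is a minimal, illustrative example rather than code from the post, and it assumes aggregate accepts a matrix whose first column holds the group identifier along with a string naming the statistic.

```
// Minimal sketch of within-group means with aggregate
// Assumes: the first column of the input matrix is the group identifier
new;

// Hypothetical panel: column 1 = individual id, column 2 = income
x = { 1 52000,
      1 54500,
      2 61000,
      2 63800 };

// Mean income within each id group
group_means = aggregate(x, "mean");
print group_means;
```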
In time series modeling, we often encounter trending or otherwise nonstationary data. Understanding the characteristics of such data is crucial for developing proper time series models. For this reason, unit root testing is an essential step when dealing with time series data. In this blog post, we cover everything you need to conduct unit root tests on time series data using GAUSS.
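For reference, one of the most widely used unit root tests, the augmented Dickey-Fuller (ADF) test, is based on a regression of the form

$$
\Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \sum_{i=1}^{p} \delta_i \Delta y_{t-i} + \varepsilon_t,
$$

where the null hypothesis of a unit root corresponds to $\gamma = 0$ and the stationary alternative to $\gamma < 0$.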
The statistical characteristics of time series data often violate the assumptions of conventional statistical methods. Because of this, analyzing time series data requires a unique set of tools and methods, collectively known as time series analysis. This article covers the fundamental concepts of time series analysis, from time series plotting to time series modeling, and should give you a foundation for working with time series data.
The preliminary econometric package for Time Series and Panel Data Methods has been updated and expanded in this first official release of tspdlib 1.0. The tspdlib 1.0 package includes functions for time series and panel data unit root tests in the presence of structural breaks, as well as panel data causality tests. It is available for direct installation using the GAUSS Package Manager.
The posterior probability distribution is the heart of Bayesian statistics and a fundamental tool for Bayesian parameter estimation. Naturally, how to infer and build these distributions is a widely examined topic, the scope of which cannot fit in one blog. In this blog, we examine Bayesian sampling using three basic but fundamental techniques: importance sampling, Metropolis-Hastings sampling, and Gibbs sampling.
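To give a flavor of what these samplers look like in code, here is a minimal random-walk Metropolis-Hastings sketch in GAUSS. The standard normal target, step size, and number of draws are illustrative assumptions, not choices from the post.

```
// Random-walk Metropolis-Hastings: minimal illustrative sketch
// Target density assumed standard normal (pdfn); proposal is a random walk
new;
rndseed 2345;

n_draws = 10000;
draws = zeros(n_draws, 1);
x_current = 0;
step = 1;

for i (1, n_draws, 1);
    // Propose a new value from a symmetric random-walk proposal
    x_prop = x_current + step * rndn(1, 1);

    // Acceptance probability (symmetric proposal cancels out)
    accept_prob = minc(1 | (pdfn(x_prop) / pdfn(x_current)));

    // Accept or reject the proposal
    if rndu(1, 1) <= accept_prob;
        x_current = x_prop;
    endif;

    draws[i] = x_current;
endfor;

print "Mean of draws:";
print meanc(draws);
```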
We use regression analysis to understand the relationships, patterns, and causalities in data. Often we are interested in understanding the impact that changes in the independent variables have on our outcome of interest. However, not all models provide such straightforward interpretations. Coefficients in more complex models may not always provide direct insights into the relationships we are interested in.
In this blog, we look more closely at the interpretation of marginal effects in three types of models, each illustrated briefly after the list below:
Purely linear models.
Models with transformations of independent variables.
Models with transformations of dependent variables.
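As a quick preview of the distinctions above, with a single regressor and generic coefficients (not estimates from the post), the interpretations differ as follows:

$$
\begin{aligned}
y &= \beta_0 + \beta_1 x      && \Rightarrow \text{a one-unit increase in } x \text{ changes } y \text{ by } \beta_1,\\
y &= \beta_0 + \beta_1 \ln x  && \Rightarrow \text{a 1\% increase in } x \text{ changes } y \text{ by roughly } \beta_1 / 100,\\
\ln y &= \beta_0 + \beta_1 x  && \Rightarrow \text{a one-unit increase in } x \text{ changes } y \text{ by roughly } 100\,\beta_1 \text{ percent.}
\end{aligned}
$$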
In this blog, we examine one of the fundamentals of panel data analysis: the one-way error component model. We cover the theoretical background of the one-way error component model, examine the fixed-effects and random-effects specifications, and provide an empirical example of each.
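For reference, the one-way error component model decomposes the regression disturbance into an individual-specific effect and an idiosyncratic error:

$$
y_{it} = x_{it}'\beta + \mu_i + \nu_{it}, \qquad i = 1, \dots, N, \quad t = 1, \dots, T,
$$

where $\mu_i$ is the unobservable individual effect, treated as fixed in the fixed-effects model and as random in the random-effects model, and $\nu_{it}$ is the remaining disturbance.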
When policy changes or treatments are imposed on people, it is common and reasonable to ask how those people have been impacted. This is a more difficult question than it seems at first glance. In today’s blog, we examine difference-in-differences (DD) estimation, a common tool for considering the impact of treatments on individuals.
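In its simplest two-group, two-period form, the DD estimate compares the pre-to-post change in the average outcome for the treated group with the same change for the untreated (control) group:

$$
\hat{\delta}_{DD} = \left( \bar{y}_{\text{treated, post}} - \bar{y}_{\text{treated, pre}} \right) - \left( \bar{y}_{\text{control, post}} - \bar{y}_{\text{control, pre}} \right).
$$

Under the parallel-trends assumption, this difference of differences isolates the effect of the treatment.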