Introduction We use regression analysis to understand the relationships, patterns, and causalities in data. Often we are interested in understanding the impacts that changes in the independent variables have on our outcome of interest. Marginal effects measure the impact that an instantaneous unit change in one variable has on the outcome variable while all other [...]
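The idea can be sketched numerically: for a logistic model, the marginal effect of a regressor at a point is the derivative of the predicted probability with respect to that regressor, holding everything else fixed. A minimal sketch, using purely hypothetical coefficients chosen for illustration:

```python
# Marginal effect of x in a logistic model, via a central finite difference.
# The coefficients b0, b1 are hypothetical, chosen only for illustration.
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def marginal_effect(x, b0, b1, h=1e-6):
    """Numerical marginal effect dP(y=1)/dx evaluated at the point x."""
    return (logistic(b0 + b1 * (x + h)) - logistic(b0 + b1 * (x - h))) / (2 * h)

b0, b1 = -1.0, 0.5                       # hypothetical coefficients
me = marginal_effect(x=2.0, b0=b0, b1=b1)

# Analytical check for the logit: dP/dx = b1 * p * (1 - p)
p = logistic(b0 + b1 * 2.0)
print(me, b1 * p * (1 - p))              # the two values agree
```

The numerical and analytical versions agree here; the finite-difference form is useful because it carries over to models without a convenient closed-form derivative.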

Introduction In this blog, we examine one of the fundamentals of panel data analysis, the one-way error component model. Today we will: Explain the theoretical one-way error component model. Consider fixed effects vs. random effects. Estimate models using an empirical example. The theoretical one-way error component model The one-way error component model is a panel data [...]
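As a quick illustration of the fixed effects side of this comparison, the within estimator sweeps out the individual effect $\mu_i$ by demeaning each individual's series. A minimal sketch on simulated data (panel dimensions and the true coefficient are hypothetical):

```python
# Within (fixed effects) estimator for the one-way error component model
# y_it = beta * x_it + mu_i + nu_it, on simulated data. All parameters here
# (N, T, beta, error scales) are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 50, 10, 2.0
mu = rng.normal(size=N)                      # individual effects mu_i
x = rng.normal(size=(N, T)) + mu[:, None]    # regressor correlated with mu_i
y = beta * x + mu[:, None] + rng.normal(scale=0.1, size=(N, T))

# Within transformation: demeaning each individual's series removes mu_i,
# since mu_i is constant over t for each i.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)
beta_fe = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
print(beta_fe)    # close to the true beta = 2.0
```

Because the regressor is deliberately correlated with $\mu_i$ here, a pooled OLS estimate would be biased while the within estimate is not, which is exactly the scenario where fixed effects is preferred over random effects.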

Introduction When policy changes or treatments are imposed on people, it is common and reasonable to ask how those people have been impacted. This is a more difficult question than it seems at first glance. In order to truly know how those individuals have been impacted, we need to consider how they would be [...]
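The difficulty is that each person has two potential outcomes, treated and untreated, and we only ever observe one of them. A minimal sketch of this idea on simulated data (all numbers are hypothetical), where random assignment lets the group-mean difference recover the average treatment effect:

```python
# Potential-outcomes sketch on simulated data: y0 and y1 both exist for every
# individual, but only one is observed. Under random assignment, the
# difference in group means estimates the average treatment effect (ATE).
# The sample size and true effect are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(42)
n, true_ate = 10_000, 1.5
y0 = rng.normal(size=n)             # outcome if untreated (never fully observed)
y1 = y0 + true_ate                  # outcome if treated
treated = rng.random(n) < 0.5       # random assignment

y_obs = np.where(treated, y1, y0)   # we see only one potential outcome each
ate_hat = y_obs[treated].mean() - y_obs[~treated].mean()
print(ate_hat)                      # close to true_ate = 1.5
```

In observational settings, where assignment is not random, this simple difference in means no longer recovers the true effect, which is what motivates the more careful treatment-effect methods.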

Introduction In this blog, we extend last week's analysis of unit root testing with structural breaks to panel data. We will again use the quarterly current account to GDP ratio but focus on a panel of data from five countries: United States, United Kingdom, Australia, South Africa, and India. Using panel data unit root tests [...]

Introduction In this blog, we examine the issue of identifying unit roots in the presence of structural breaks. We will use the quarterly US current account to GDP ratio to compare results from a number of unit root tests found in the GAUSS tspdlib library including the: Zivot-Andrews (1992) unit root test with a single [...]
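The core mechanic of the Zivot-Andrews approach can be sketched in a few lines: run an ADF-style regression that includes a one-time break dummy at every candidate break date, and pick the break that minimizes the t-statistic on the lagged level. The sketch below is illustrative only, assuming an intercept-only break, no lag augmentation, and simulated data; it is not the tspdlib implementation and reports no critical values:

```python
# Illustrative Zivot-Andrews-style search (intercept break only, no lag
# augmentation, no critical values). The data are simulated.
import numpy as np

def za_intercept_break(y, trim=0.15):
    """Minimize the ADF t-stat on the lagged level over candidate break dates."""
    dy = np.diff(y)
    ylag = y[:-1]
    n = len(dy)
    t_min, k_min = np.inf, None
    for k in range(int(trim * n), int((1 - trim) * n)):
        du = (np.arange(n) >= k).astype(float)       # intercept-break dummy
        X = np.column_stack([np.ones(n), du, ylag])
        b, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ b
        s2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
        t = b[2] / se                                # t-stat on lagged level
        if t < t_min:
            t_min, k_min = t, k
    return t_min, k_min

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=300))   # random walk: should look non-stationary
t_stat, brk = za_intercept_break(y)
print(t_stat, brk)
```

A stationary series run through the same search produces a far more negative minimum t-statistic than this random walk, which is the intuition the formal test builds critical values around.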

Hatemi code for cointegration with multiple structural breaks This week's blog brings you the second video in the series examining how to run publicly available GAUSS code. This video runs the popular code by Hatemi-J for testing cointegration with multiple structural breaks. In this video you will learn how to: Substitute your own dataset. Modify the [...]

Introduction Classical linear regression estimates the mean response of the dependent variable conditional on the independent variables. There are many cases, such as skewed data, multimodal data, or data with outliers, when the behavior at the conditional mean fails to fully capture the patterns in the data. In these cases, quantile regression provides a useful [...]
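Where OLS minimizes squared residuals, quantile regression minimizes the pinball (check) loss, which tilts the penalty so that the fitted line passes through the chosen conditional quantile. A minimal sketch on simulated data, fitting the conditional median by subgradient descent (a production fit would use linear programming; the data and learning parameters here are hypothetical):

```python
# Quantile regression sketch: fit the conditional tau-quantile line by
# subgradient descent on the pinball loss. Data and tuning parameters are
# hypothetical; real implementations solve this as a linear program.
import numpy as np

def fit_quantile(x, y, tau, lr=0.01, steps=5000):
    a, b = 0.0, 0.0                          # intercept, slope
    for _ in range(steps):
        r = y - (a + b * x)                  # residuals
        g = np.where(r > 0, -tau, 1 - tau)   # d(pinball loss)/d(prediction)
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=2000)
y = 1.0 + 0.5 * x + rng.normal(size=2000)    # true median line: 1 + 0.5x

a50, b50 = fit_quantile(x, y, tau=0.5)
print(a50, b50)    # roughly (1.0, 0.5)
```

Setting `tau` to 0.1 or 0.9 instead of 0.5 traces out the lower or upper conditional quantile, which is what makes the method informative for skewed data or data with outliers.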

Introduction The bootstrap is a commonly used resampling technique which involves taking random samples with replacement to quantify uncertainty about a particular estimator or statistic. Goals In this post, we will apply the bootstrap procedure to asset returns. Our data will be annual returns from the S&P 500 and the 10-year US Treasury bond [...]
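The procedure itself is simple: resample the observed returns with replacement many times, recompute the statistic on each resample, and read uncertainty off the resulting distribution. A minimal sketch on simulated returns (the series below is hypothetical, not actual S&P 500 data):

```python
# Bootstrap percentile interval for the mean return. The return series is
# simulated with hypothetical parameters, not actual market data.
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.08, 0.15, size=60)   # 60 simulated "annual" returns

B = 5000
boot_means = np.array([
    rng.choice(returns, size=len(returns), replace=True).mean()
    for _ in range(B)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # 95% percentile interval
print(returns.mean(), lo, hi)
```

The same resampling loop works for statistics with no convenient standard-error formula, such as Sharpe ratios or drawdowns, which is where the bootstrap earns its keep.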

Introduction Permutation Entropy (PE) is a robust time series tool which quantifies the complexity of a dynamic system by capturing the order relations between values of a time series and extracting a probability distribution of the ordinal patterns (see Henry and Judge, 2018). Among its main features, the PE approach: Is [...]
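The computation reduces to counting ordinal patterns: slide a window of length $m$ along the series, record the ordering of the values in each window, and take the Shannon entropy of the resulting pattern distribution. A minimal sketch, assuming embedding dimension $m$ and a time delay of 1, normalized so the result lies in $[0, 1]$:

```python
# Permutation entropy sketch: entropy of the empirical distribution of
# ordinal patterns of length m (time delay fixed at 1 for simplicity),
# normalized by log(m!) so the result lies in [0, 1].
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of series x with embedding dimension m."""
    patterns = Counter(
        tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

rng = np.random.default_rng(0)
pe_noise = permutation_entropy(rng.normal(size=1000))     # near 1: fully random
pe_sine = permutation_entropy(np.sin(np.arange(1000) / 5))  # lower: regular
print(pe_noise, pe_sine)
```

White noise uses all $m!$ patterns roughly equally and scores near 1, while a regular signal like a sine wave concentrates on a few monotone patterns and scores much lower, which is the complexity contrast PE is designed to capture.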

Introduction Linear regression commonly assumes that the error terms of a model are independently and identically distributed (i.i.d.). However, when datasets contain groups, the potential for correlated error terms within groups arises. Example: Weather shocks to apple orchards For example, consider a model of the supply of apples from various orchards across the United States. [...]
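When errors are correlated within groups, the usual i.i.d. standard errors understate the uncertainty; the standard remedy is a cluster-robust sandwich estimator that sums the score contributions within each cluster. A minimal sketch on simulated grouped data (cluster counts, sizes, and coefficients are hypothetical):

```python
# Cluster-robust (clustered) standard errors for OLS via the sandwich formula,
# on simulated data with a shared shock within each cluster. All parameters
# are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(3)
G, n_g = 40, 25                          # 40 clusters of 25 observations each
g = np.repeat(np.arange(G), n_g)
x = rng.normal(size=G * n_g)
u = rng.normal(size=G)[g] + rng.normal(size=G * n_g)  # cluster-correlated errors
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones_like(x), x])
b = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b

# Sandwich: (X'X)^-1 [ sum_g (X_g' u_g)(X_g' u_g)' ] (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = sum(
    (X[g == c].T @ resid[g == c])[:, None] @ (X[g == c].T @ resid[g == c])[None, :]
    for c in range(G)
)
se_cluster = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
se_iid = np.sqrt(np.diag(XtX_inv) * resid.var(ddof=2))
print(se_cluster, se_iid)   # clustered SEs are much larger for the intercept
```

The coefficient estimates themselves are unchanged; only the standard errors grow, and the inflation is largest for regressors that vary little within clusters.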