Recent Posts

Easy Management of Categorical Variables

Categorical variables offer an important opportunity to capture qualitative effects in statistical modeling. Unfortunately, it can be tedious and cumbersome to manage categorical variables in statistical software. The new GAUSS category type, introduced in GAUSS 21, makes it easy and intuitive to work with categorical data. In today’s blog we use real-life housing data to explore the numerous advantages of the GAUSS category type including:
  • Easy setup and viewing of categorical data.
  • Simple renaming of category labels.
  • Easy changing of the reference base case and reordering of categories.
  • Single-line frequency plots and tables (see the sketch after this list).
  • Internal creation of dummy variables for regressions.
  • Proper labeling of categories in regression output.
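
To give a flavor of the workflow, here is a minimal GAUSS sketch. The housing.csv file and the price and style variables are placeholders, and we assume the frequency and plotFreq procedures available in recent GAUSS releases:

    new;

    // Load the sale price and a categorical 'style' variable;
    // the file and variable names are placeholders.
    housing = loadd("housing.csv", "price + cat(style)");

    // One-line frequency table of the category
    frequency(housing, "style");

    // One-line frequency plot of the category
    plotFreq(housing, "style");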

Introduction to Categorical Variables

Categorical variables play an important role in modeling, offering a quantitative way to include qualitative outcomes in our models. However, it is important to know how to use them appropriately and how to interpret models that include them. In this blog, you’ll learn the fundamentals you need to make the most of categorical variables.

New Release of TSPDLIB 2.0

Learn why TSPDLIB 2.0 is the easiest and most comprehensive time series and panel data unit root and cointegration testing package on the market. TSPDLIB 2.0 includes expanded functions for time series and panel data testing in the presence of structural breaks. In addition, it is easier than ever to use, with newly implemented default parameter settings, updated output printing, and automatic date variable detection.

Easy and Fast Data Management in GAUSS 21

The new dataframes and interactive data management tools in GAUSS 21 will make your work more enjoyable and save you hours of time. Learn more about the latest features including:
  • New dataframes that handle strings, categories, and dates with ease (see the sketch after this list).
  • Interactive data filtering.
  • Easy-to-manage date displays.
  • Interactive management of categorical variables.
  • Auto-generated code.
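
As a minimal sketch of the first point, a file mixing dates, categories, strings, and numbers can be loaded into a dataframe in one line. The sales.csv file and its column names are hypothetical, and the date(), cat(), and str() keywords are the type specifiers assumed for the loadd formula string:

    new;

    // Hypothetical file and column names
    sales = loadd("sales.csv", "date(order_date) + cat(region) + str(customer) + revenue");

    // Preview the first five rows of the dataframe
    print sales[1:5, .];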

Maximum Likelihood Estimation in GAUSS

Maximum likelihood is a fundamental workhorse for estimating model parameters with applications ranging from simple linear regression to advanced discrete choice models. Today we learn how to perform maximum likelihood estimation with the GAUSS Maximum Likelihood MT library using our simple linear regression example. We’ll show all the fundamentals you need to get started with maximum likelihood estimation in GAUSS including:
  • How to create a likelihood function.
  • How to call the maxlikmt procedure to estimate parameters (a sketch follows this list).
  • How to interpret the results from maxlikmt.
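
As a preview, here is a minimal sketch of the first two pieces for a simple linear regression with normal errors. The procedure name, starting values, and data are our own placeholders, and the exact calling conventions are covered in the post and in the Maximum Likelihood MT documentation:

    new;
    library maxlikmt;

    // Log-likelihood for the linear model y = b[1] + b[2]*x + e, e ~ N(0, b[3]).
    // 'ind' tells the procedure whether maxlikmt needs the function value,
    // gradient, or Hessian on this call; here we only supply the function value.
    proc (1) = lnlk(b, y, x, ind);
        local mu, sigma2;
        struct modelResults mm;

        mu = b[1] + b[2] .* x;
        sigma2 = b[3];

        if ind[1];
            // Vector of per-observation log-likelihood contributions
            mm.function = -0.5 * (ln(2 * pi * sigma2) + (y - mu).^2 ./ sigma2);
        endif;

        retp(mm);
    endp;

    // Starting values for the intercept, slope, and error variance
    b_start = 0.5|0.5|1;

    // y and x are assumed to be loaded already
    struct maxlikmtResults out;
    out = maxlikmt(&lnlk, b_start, y, x);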

Panel Data Stationarity Test With Structural Breaks

Reliable unit root testing is an important step in any time series or panel data analysis. However, standard time series and panel data unit root tests aren’t reliable when structural breaks are present. Because of this, when structural breaks are suspected, we must employ tests that properly incorporate these breaks. Today we examine one such test, the Carrion-i-Silvestre et al. (2005) panel data test for stationarity in the presence of multiple structural breaks.

Beginner's Guide To Maximum Likelihood Estimation

Maximum likelihood is a widely used technique for estimation with applications in many areas including time series modeling, panel data, discrete data, and even machine learning. In today’s blog, we cover the fundamentals of maximum likelihood including:
  1. The basic theory of maximum likelihood.
  2. The advantages and disadvantages of maximum likelihood estimation.
  3. The log-likelihood function (previewed just after this list).
  4. Modeling applications.
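
As a preview of item 3, the log-likelihood is the log of the joint density of the observed sample; for independent observations the product of densities becomes a sum of logs:

    \ell(\theta) = \ln L(\theta) = \ln \prod_{i=1}^{n} f(y_i \mid \theta) = \sum_{i=1}^{n} \ln f(y_i \mid \theta)

The maximum likelihood estimate is the value of \theta that maximizes \ell(\theta), and working with a sum of logs rather than a product of densities is what keeps estimation numerically manageable.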

Anchoring Vignettes and the Compound Hierarchical Ordered Probit (CHOPIT) Model

Self-assessments are a common survey tool, but they can be difficult to analyze due to bias arising from systematic variation in individual reporting styles, known as reporting heterogeneity. Anchoring vignette questions, combined with the Compound Hierarchical Ordered Probit (CHOPIT) model, allow researchers to address this issue in survey data (King et al. 2004). This methodology is based on two key identifying assumptions:
  • Response consistency (RC)
  • Vignette equivalence (VE)
In today’s blog, we look more closely at the fundamental pieces of this modeling technique, including the:
  • Typical data setup.
  • Hierarchical Ordered Probit Model (HOPIT).
  • Anchoring vignettes.
  • Likelihood and identifying assumptions used for estimation.

How to Create Tiled Graphs in GAUSS

Placing graphs next to each other can be a great way to present information and improve data visualization. Today we will learn how to create tiled graphs in GAUSS with the easy-to-use plotLayout procedure.

We will work through two simple examples where you will learn:
  • How to create both uniform tiled layouts and layouts with graphs of different sizes (see the sketch after this list).
  • Which graph types can be used with plotLayout.
  • How to clear your tiled graph layouts.
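
Here is a minimal sketch of the tiling pattern using made-up data. plotLayout takes the number of grid rows, the number of grid columns, and the index of the tile to draw in next, and plotClearLayout resets the canvas:

    new;

    // Example data
    x = seqa(1, 1, 50);
    y1 = cumsumc(rndn(50, 1));
    y2 = sin(x / 5);

    // First tile of a 2 x 1 layout: rows, columns, tile index
    plotLayout(2, 1, 1);
    plotXY(x, y1);

    // Second tile
    plotLayout(2, 1, 2);
    plotXY(x, y2);

    // Clear the layout so later graphs use the full canvas again
    plotClearLayout();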
